public inbox for linux-kernel@vger.kernel.org
* [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization
@ 2026-02-12 15:58 Jim Mattson
  2026-02-12 15:58 ` [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode Jim Mattson
                   ` (7 more replies)
  0 siblings, 8 replies; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

Currently, KVM's implementation of nested SVM treats the PAT MSR the same
way whether or not nested NPT is enabled: L1 and L2 share a single
PAT. However, the APM specifies that when nested NPT is enabled, the host
(L1) and the guest (L2) should have independent PATs: hPAT for L1 and gPAT
for L2. This patch series implements the architectural specification in
KVM.

Use the existing PAT MSR (vcpu->arch.pat) for hPAT. Add a new field,
svm->nested.gpat, for gPAT. With nested NPT enabled, redirect guest
accesses to the IA32_PAT MSR to gPAT. All other accesses, including
userspace accesses via KVM_{GET,SET}_MSRS, continue to reference hPAT.  The
special handling of userspace accesses ensures save/restore forward
compatibility (i.e. resuming a new checkpoint on an older kernel). When an
old kernel restores a checkpoint from a new kernel, the gPAT will be lost,
and L2 will simply use L1's PAT, which is the existing behavior of the old
kernel anyway.
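The redirection rule above can be sketched in standalone C. The struct and field names below are illustrative stand-ins for KVM's actual state (vcpu->arch.pat, svm->nested.save.g_pat, is_guest_mode(), nested_npt_enabled()), not real KVM code:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the relevant vCPU/SVM state. */
struct pat_state {
	uint64_t hpat;       /* L1's PAT (vcpu->arch.pat in KVM) */
	uint64_t gpat;       /* L2's PAT (svm->nested.save.g_pat in KVM) */
	bool guest_mode;     /* is_guest_mode(vcpu) */
	bool nested_npt;     /* nested_npt_enabled(svm) */
};

/*
 * Guest accesses to IA32_PAT hit gPAT only while L2 runs with nested
 * NPT; host-initiated (userspace) accesses always hit hPAT, preserving
 * KVM_{GET,SET}_MSRS compatibility with older kernels.
 */
static uint64_t pat_read(const struct pat_state *s, bool host_initiated)
{
	if (!host_initiated && s->guest_mode && s->nested_npt)
		return s->gpat;
	return s->hpat;
}
```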

v1: https://lore.kernel.org/kvm/20260113003016.3511895-1-jmattson@google.com/
v2: https://lore.kernel.org/kvm/20260115232154.3021475-1-jmattson@google.com/
v3: https://lore.kernel.org/kvm/20260205214326.1029278-1-jmattson@google.com/

  v3 -> v4:
   * Rebase on top of Yosry's v5 "Nested SVM fixes, cleanups, and hardening"
   * Rename the svm_set_vmcb_gpat() helper to vmcb_set_gpat() for
     consistency with other VMCB helpers [Yosry].
   * Cache g_pat within struct vmcb_save_area_cached (as
     svm->nested.save.g_pat) instead of using a standalone field in
     svm->nested [Sean].
   * Update nested_vmcb_check_save() to optionally validate the cached
     g_pat, depending on a new boolean argument [Yosry].
   * Reduce indentation in nested_vmcb02_prepare_save() when setting the
     guest PAT [Sean].
    

Jim Mattson (8):
  KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest
    mode
  KVM: x86: nSVM: Cache and validate vmcb12 g_pat
  KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT
  KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT
  KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE
  KVM: x86: nSVM: Handle restore of legacy nested state
  KVM: selftests: nSVM: Add svm_nested_pat test

 arch/x86/include/uapi/asm/kvm.h               |   5 +
 arch/x86/kvm/svm/nested.c                     |  60 +++-
 arch/x86/kvm/svm/svm.c                        |  40 ++-
 arch/x86/kvm/svm/svm.h                        |  38 ++-
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../selftests/kvm/x86/svm_nested_pat_test.c   | 298 ++++++++++++++++++
 6 files changed, 413 insertions(+), 29 deletions(-)
 create mode 100644 tools/testing/selftests/kvm/x86/svm_nested_pat_test.c

-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:17   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat Jim Mattson
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

When running an L2 guest and writing to MSR_IA32_CR_PAT, the host PAT value
is stored in both vmcb01's and vmcb02's g_pat fields, but the VMCB_NPT
clean bit is cleared only for vmcb02.

Introduce the helper vmcb_set_gpat() which sets vmcb->save.g_pat and marks
the VMCB dirty for VMCB_NPT. Use this helper in both svm_set_msr() for
updating vmcb01 and in nested_vmcb02_compute_g_pat() for updating vmcb02,
ensuring both VMCBs' NPT fields are properly marked dirty.

Fixes: 4995a3685f1b ("KVM: SVM: Use a separate vmcb for the nested L2 guest")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 2 +-
 arch/x86/kvm/svm/svm.c    | 3 +--
 arch/x86/kvm/svm/svm.h    | 9 +++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d80b1bde6630..b72a1f3c4144 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -707,7 +707,7 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
 		return;
 
 	/* FIXME: merge g_pat from vmcb01 and vmcb12.  */
-	svm->nested.vmcb02.ptr->save.g_pat = svm->vmcb01.ptr->save.g_pat;
+	vmcb_set_gpat(svm->nested.vmcb02.ptr, svm->vmcb01.ptr->save.g_pat);
 }
 
 static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 364915f42e13..529cbac57814 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2924,10 +2924,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 		if (ret)
 			break;
 
-		svm->vmcb01.ptr->save.g_pat = data;
+		vmcb_set_gpat(svm->vmcb01.ptr, data);
 		if (is_guest_mode(vcpu))
 			nested_vmcb02_compute_g_pat(svm);
-		vmcb_mark_dirty(svm->vmcb, VMCB_NPT);
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0bb93879abfe..9850ed01e16e 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -434,14 +434,15 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
 	vmcb->control.clean &= ~(1 << bit);
 }
 
-static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
+static inline bool vmcb12_is_dirty(struct vmcb_ctrl_area_cached *control, int bit)
 {
-        return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
+	return !test_bit(bit, (unsigned long *)&control->clean);
 }
 
-static inline bool vmcb12_is_dirty(struct vmcb_ctrl_area_cached *control, int bit)
+static inline void vmcb_set_gpat(struct vmcb *vmcb, u64 data)
 {
-	return !test_bit(bit, (unsigned long *)&control->clean);
+	vmcb->save.g_pat = data;
+	vmcb_mark_dirty(vmcb, VMCB_NPT);
 }
 
 static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
  2026-02-12 15:58 ` [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:22   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT Jim Mattson
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

Cache g_pat from vmcb12 in vmcb_save_area_cached to avoid TOCTTOU issues,
and add a validity check so that when nested paging is enabled for vmcb12,
an invalid g_pat at emulated VMRUN causes an immediate VMEXIT with exit
code VMEXIT_INVALID, as specified in the APM, volume 2: "Nested Paging and
VMRUN/VMEXIT."
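The validity check corresponds to kvm_pat_valid(): each of the eight PAT entries must encode a defined memory type (0 = UC, 1 = WC, 4 = WT, 5 = WP, 6 = WB, 7 = UC-; 2, 3 and everything >= 8 are reserved). A simplified standalone sketch of that rule (not the kernel's bit-twiddled implementation):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Return true iff every byte of the PAT MSR encodes a defined memory
 * type.  Mirrors the semantics of KVM's kvm_pat_valid().
 */
static bool pat_valid(uint64_t pat)
{
	int i;

	for (i = 0; i < 8; i++) {
		uint8_t type = (pat >> (i * 8)) & 0xff;

		if (type != 0 && type != 1 && type != 4 &&
		    type != 5 && type != 6 && type != 7)
			return false;	/* 2, 3, and >= 8 are reserved */
	}
	return true;
}
```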

Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 17 +++++++++++++----
 arch/x86/kvm/svm/svm.h    |  1 +
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index b72a1f3c4144..91b35adb83f8 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -426,7 +426,8 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
 
 /* Common checks that apply to both L1 and L2 state.  */
 static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
-				   struct vmcb_save_area_cached *save)
+				   struct vmcb_save_area_cached *save,
+				   bool check_gpat)
 {
 	if (CC(!(save->efer & EFER_SVME)))
 		return false;
@@ -462,6 +463,9 @@ static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
 	if (CC(!kvm_valid_efer(vcpu, save->efer)))
 		return false;
 
+	if (check_gpat && CC(!kvm_pat_valid(save->g_pat)))
+		return false;
+
 	return true;
 }
 
@@ -573,6 +577,7 @@ static void __nested_copy_vmcb_save_to_cache(struct vmcb_save_area_cached *to,
 
 	to->rax = from->rax;
 	to->cr2 = from->cr2;
+	to->g_pat = from->g_pat;
 
 	svm_copy_lbrs(to, from);
 }
@@ -1036,7 +1041,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa, bool from_vmrun)
 
 	enter_guest_mode(vcpu);
 
-	if (!nested_vmcb_check_save(vcpu, &svm->nested.save) ||
+	if (!nested_vmcb_check_save(vcpu, &svm->nested.save,
+				    nested_npt_enabled(svm)) ||
 	    !nested_vmcb_check_controls(vcpu, &svm->nested.ctl,
 					svm->vmcb01.ptr->save.cr0))
 		return -EINVAL;
@@ -2006,13 +2012,16 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 
 	/*
 	 * Validate host state saved from before VMRUN (see
-	 * nested_svm_check_permissions).
+	 * nested_svm_check_permissions). Note that the g_pat field is not
+	 * validated, because (a) it may have been clobbered by SMM before
+	 * KVM_GET_NESTED_STATE, and (b) it is not loaded at emulated
+	 * #VMEXIT.
 	 */
 	__nested_copy_vmcb_save_to_cache(&save_cached, save);
 	if (!(save->cr0 & X86_CR0_PG) ||
 	    !(save->cr0 & X86_CR0_PE) ||
 	    (save->rflags & X86_EFLAGS_VM) ||
-	    !nested_vmcb_check_save(vcpu, &save_cached))
+	    !nested_vmcb_check_save(vcpu, &save_cached, false))
 		goto out_free;
 
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 9850ed01e16e..a49c48459e0b 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -161,6 +161,7 @@ struct vmcb_save_area_cached {
 	u64 isst_addr;
 	u64 rax;
 	u64 cr2;
+	u64 g_pat;
 	u64 dbgctl;
 	u64 br_from;
 	u64 br_to;
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
  2026-02-12 15:58 ` [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode Jim Mattson
  2026-02-12 15:58 ` [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:27   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT Jim Mattson
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

When nested NPT is enabled in vmcb12, copy the (cached and validated)
vmcb12 g_pat field to the guest PAT register. Under KVM, the guest PAT
register lives in svm->nested.save.g_pat.

When NPT is enabled, but nested NPT is disabled, copy L1's IA32_PAT MSR to
the vmcb02 g_pat field, since L2 shares the IA32_PAT MSR with L1.

When NPT is disabled, the g_pat field is ignored by hardware.
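The resulting three-way decision can be summarized in a small standalone sketch (illustrative names, not KVM's actual code):

```c
enum gpat_source {
	GPAT_FROM_VMCB12,	/* cached svm->nested.save.g_pat */
	GPAT_FROM_L1_PAT,	/* vcpu->arch.pat, shared with L1 */
	GPAT_IGNORED,		/* hardware ignores g_pat without NPT */
};

/* Which value, if any, vmcb02.g_pat should carry for the L2 run. */
static enum gpat_source vmcb02_gpat_source(int npt_enabled, int nested_npt)
{
	if (nested_npt)
		return GPAT_FROM_VMCB12;
	if (npt_enabled)
		return GPAT_FROM_L1_PAT;
	return GPAT_IGNORED;
}
```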

Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 91b35adb83f8..dc8275837120 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -724,9 +724,6 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
 	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 
-	nested_vmcb02_compute_g_pat(svm);
-	vmcb_mark_dirty(vmcb02, VMCB_NPT);
-
 	/* Load the nested guest state */
 	if (svm->nested.vmcb12_gpa != svm->nested.last_vmcb12_gpa) {
 		new_vmcb12 = true;
@@ -757,6 +754,13 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
 		vmcb_mark_dirty(vmcb02, VMCB_CET);
 	}
 
+	if (nested_npt_enabled(svm)) {
+		if (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_NPT)))
+			vmcb_set_gpat(vmcb02, svm->nested.save.g_pat);
+	} else if (npt_enabled) {
+		vmcb_set_gpat(vmcb02, vcpu->arch.pat);
+	}
+
 	kvm_set_rflags(vcpu, save->rflags | X86_EFLAGS_FIXED);
 
 	svm_set_efer(vcpu, svm->nested.save.efer);
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
                   ` (2 preceding siblings ...)
  2026-02-12 15:58 ` [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:30   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT Jim Mattson
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

When the vCPU is in guest mode with nested NPT enabled, guest accesses to
IA32_PAT are redirected to the gPAT register, which is stored in
svm->nested.save.g_pat.

Non-guest accesses (e.g. from userspace) to IA32_PAT are always redirected
to hPAT, which is stored in vcpu->arch.pat.

This is architected behavior. It also makes it possible to restore a new
checkpoint on an old kernel with reasonable semantics. After the restore,
gPAT will be lost, and L2 will run on L1's PAT. Note that the old kernel
would have always run L2 on L1's PAT.

Add WARN_ON_ONCE to both svm_get_msr() and svm_set_msr() to flag any
host-initiated accesses originating from KVM itself rather than userspace.

Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c |  9 ---------
 arch/x86/kvm/svm/svm.c    | 37 ++++++++++++++++++++++++++++++-------
 arch/x86/kvm/svm/svm.h    | 17 ++++++++++++++++-
 3 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index dc8275837120..69b577a4915c 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -706,15 +706,6 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
 	return 0;
 }
 
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
-{
-	if (!svm->nested.vmcb02.ptr)
-		return;
-
-	/* FIXME: merge g_pat from vmcb01 and vmcb12.  */
-	vmcb_set_gpat(svm->nested.vmcb02.ptr, svm->vmcb01.ptr->save.g_pat);
-}
-
 static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
 {
 	struct vmcb_ctrl_area_cached *control = &svm->nested.ctl;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 529cbac57814..205bf07896ad 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2837,6 +2837,21 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 	case MSR_AMD64_DE_CFG:
 		msr_info->data = svm->msr_decfg;
 		break;
+	case MSR_IA32_CR_PAT:
+		/*
+		 * When nested NPT is enabled, L2 has a separate PAT from
+		 * L1.  Guest accesses to IA32_PAT while running L2 target
+		 * L2's gPAT; host-initiated accesses always target L1's
+		 * hPAT for backward and forward KVM_GET_MSRS compatibility
+		 * with older kernels.
+		 */
+		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
+		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
+		    nested_npt_enabled(svm))
+			msr_info->data = svm->nested.save.g_pat;
+		else
+			msr_info->data = vcpu->arch.pat;
+		break;
 	default:
 		return kvm_get_msr_common(vcpu, msr_info);
 	}
@@ -2920,13 +2935,21 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 
 		break;
 	case MSR_IA32_CR_PAT:
-		ret = kvm_set_msr_common(vcpu, msr);
-		if (ret)
-			break;
-
-		vmcb_set_gpat(svm->vmcb01.ptr, data);
-		if (is_guest_mode(vcpu))
-			nested_vmcb02_compute_g_pat(svm);
+		if (!kvm_pat_valid(data))
+			return 1;
+		/*
+		 * When nested NPT is enabled, L2 has a separate PAT from
+		 * L1.  Guest accesses to IA32_PAT while running L2 target
+		 * L2's gPAT; host-initiated accesses always target L1's
+		 * hPAT for backward and forward KVM_SET_MSRS compatibility
+		 * with older kernels.
+		 */
+		WARN_ON_ONCE(msr->host_initiated && vcpu->wants_to_run);
+		if (!msr->host_initiated && is_guest_mode(vcpu) &&
+		    nested_npt_enabled(svm))
+			svm_set_gpat(svm, data);
+		else
+			svm_set_hpat(svm, data);
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index a49c48459e0b..88549705133f 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -607,6 +607,22 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 	return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
 }
 
+static inline void svm_set_gpat(struct vcpu_svm *svm, u64 data)
+{
+	svm->nested.save.g_pat = data;
+	vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
+}
+
+static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
+{
+	svm->vcpu.arch.pat = data;
+	if (npt_enabled) {
+		vmcb_set_gpat(svm->vmcb01.ptr, data);
+		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
+			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
+	}
+}
+
 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
 {
 	return guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_VNMI) &&
@@ -840,7 +856,6 @@ void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
 void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
 				    struct vmcb_save_area *save);
 void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
-void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
 void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
 
 extern struct kvm_x86_nested_ops svm_nested_ops;
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
                   ` (3 preceding siblings ...)
  2026-02-12 15:58 ` [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:33   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE Jim Mattson
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

According to the APM volume 3 pseudo-code for "VMRUN," when nested paging
is enabled in the vmcb, the guest PAT register (gPAT) is saved to the vmcb
on emulated VMEXIT.

When nested NPT is enabled, save the vmcb02 g_pat field to the vmcb12 g_pat
field on emulated VMEXIT.

Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 69b577a4915c..26f758e294ab 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1312,6 +1312,9 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
 	vmcb12->save.dr6    = svm->vcpu.arch.dr6;
 	vmcb12->save.cpl    = vmcb02->save.cpl;
 
+	if (nested_npt_enabled(svm))
+		vmcb12->save.g_pat = vmcb02->save.g_pat;
+
 	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
 		vmcb12->save.s_cet	= vmcb02->save.s_cet;
 		vmcb12->save.isst_addr	= vmcb02->save.isst_addr;
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
                   ` (4 preceding siblings ...)
  2026-02-12 15:58 ` [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:36   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state Jim Mattson
  2026-02-12 15:58 ` [PATCH v4 8/8] KVM: selftests: nSVM: Add svm_nested_pat test Jim Mattson
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

Add a 'flags' field to the SVM nested state header, and use bit 0 of the
flags to indicate that gPAT is stored in the nested state.

If the vCPU is in guest mode with nested NPT enabled, store the current
gPAT value (svm->nested.save.g_pat) into the nested state header and set
the flag.

Note that struct kvm_svm_nested_state_hdr is included in a union padded to
120 bytes, so there is room to add the flags field and the gpat field
without changing any offsets.
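A standalone mirror of the layout can confirm the offset claim (the struct below is a local copy for illustration, not the uapi header itself):

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Local mirror of the updated struct kvm_svm_nested_state_hdr; in the
 * kernel it sits inside a union padded to 120 bytes in
 * struct kvm_nested_state, so appending fields cannot move anything.
 */
struct svm_hdr {
	uint64_t vmcb_pa;	/* pre-existing, stays at offset 0 */
	uint32_t flags;
	uint32_t reserved;
	uint64_t gpat;
};

_Static_assert(offsetof(struct svm_hdr, vmcb_pa) == 0,
	       "vmcb_pa offset unchanged");
_Static_assert(sizeof(struct svm_hdr) == 24,
	       "well within the 120-byte union");
```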

Fixes: cc440cdad5b7 ("KVM: nSVM: implement KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE")
Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/include/uapi/asm/kvm.h |  5 +++++
 arch/x86/kvm/svm/nested.c       | 16 ++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 846a63215ce1..664d04d1db3f 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -495,6 +495,8 @@ struct kvm_sync_regs {
 
 #define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE	0x00000001
 
+#define KVM_STATE_SVM_VALID_GPAT	0x00000001
+
 /* vendor-independent attributes for system fd (group 0) */
 #define KVM_X86_GRP_SYSTEM		0
 #  define KVM_X86_XCOMP_GUEST_SUPP	0
@@ -531,6 +533,9 @@ struct kvm_svm_nested_state_data {
 
 struct kvm_svm_nested_state_hdr {
 	__u64 vmcb_pa;
+	__u32 flags;
+	__u32 reserved;
+	__u64 gpat;
 };
 
 /* for KVM_CAP_NESTED_STATE */
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 26f758e294ab..f73f3e586012 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1893,6 +1893,10 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu,
 	/* First fill in the header and copy it out.  */
 	if (is_guest_mode(vcpu)) {
 		kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb12_gpa;
+		if (nested_npt_enabled(svm)) {
+			kvm_state.hdr.svm.flags |= KVM_STATE_SVM_VALID_GPAT;
+			kvm_state.hdr.svm.gpat = svm->nested.save.g_pat;
+		}
 		kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE;
 		kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE;
 
@@ -2022,6 +2026,14 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	    !nested_vmcb_check_save(vcpu, &save_cached, false))
 		goto out_free;
 
+	/*
+	 * Validate gPAT, if provided. This is done separately from the
+	 * vmcb_save_area_cached validation above, because gPAT is L2
+	 * state, but the vmcb_save_area_cached is populated with L1 state.
+	 */
+	if ((kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT) &&
+	    !kvm_pat_valid(kvm_state->hdr.svm.gpat))
+		goto out_free;
 
 	/*
 	 * All checks done, we can enter guest mode. Userspace provides
@@ -2061,6 +2073,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	if (ret)
 		goto out_free;
 
+	if (nested_npt_enabled(svm) &&
+	    (kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT))
+		svm_set_gpat(svm, kvm_state->hdr.svm.gpat);
+
 	svm_switch_vmcb(svm, &svm->nested.vmcb02);
 	nested_vmcb02_prepare_control(svm, svm->vmcb->save.rip, svm->vmcb->save.cs.base);
 
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
                   ` (5 preceding siblings ...)
  2026-02-12 15:58 ` [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  2026-02-13  0:38   ` Yosry Ahmed
  2026-02-12 15:58 ` [PATCH v4 8/8] KVM: selftests: nSVM: Add svm_nested_pat test Jim Mattson
  7 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

When nested NPT is enabled and KVM_SET_NESTED_STATE is used to restore an
old checkpoint (without a valid gPAT), the current IA32_PAT value must be
used as L2's gPAT.

Unfortunately, checkpoint restore is non-atomic, and the order in which
state components are restored is not specified. Hence, the current IA32_PAT
value may be restored by KVM_SET_MSRS after KVM_SET_NESTED_STATE.  To
further complicate matters, there may be a KVM_GET_NESTED_STATE before the
next KVM_RUN.

Introduce a new boolean, svm->nested.legacy_gpat_semantics. When set, hPAT
updates are also applied to gPAT, preserving the old behavior (i.e. L2
shares L1's PAT). Set this boolean when restoring legacy state (i.e. nested
NPT is enabled, but no gPAT is provided) in KVM_SET_NESTED_STATE. Clear
this boolean in svm_vcpu_pre_run() to ensure that hPAT and gPAT are
decoupled before the vCPU resumes execution.
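The intended lifecycle can be sketched with a small standalone model (illustrative names, not KVM's actual code):

```c
#include <stdbool.h>
#include <stdint.h>

struct nested_pat {
	uint64_t hpat;
	uint64_t gpat;
	bool legacy_gpat_semantics;
};

/*
 * Legacy restore (nested NPT enabled, no gPAT in the nested state):
 * gPAT starts as a copy of hPAT and tracks later hPAT writes until the
 * next KVM_RUN, since KVM_SET_MSRS may arrive in any order relative to
 * KVM_SET_NESTED_STATE.
 */
static void restore_legacy(struct nested_pat *s)
{
	s->gpat = s->hpat;
	s->legacy_gpat_semantics = true;
}

static void set_hpat(struct nested_pat *s, uint64_t data)
{
	s->hpat = data;
	if (s->legacy_gpat_semantics)
		s->gpat = data;
}

/* Modeled on svm_vcpu_pre_run(): decouple hPAT and gPAT. */
static void pre_run(struct nested_pat *s)
{
	s->legacy_gpat_semantics = false;
}
```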

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 arch/x86/kvm/svm/nested.c | 11 ++++++++---
 arch/x86/kvm/svm/svm.c    |  2 ++
 arch/x86/kvm/svm/svm.h    | 11 +++++++++++
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index f73f3e586012..d854d29b0bd8 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -2073,9 +2073,14 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 	if (ret)
 		goto out_free;
 
-	if (nested_npt_enabled(svm) &&
-	    (kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT))
-		svm_set_gpat(svm, kvm_state->hdr.svm.gpat);
+	if (nested_npt_enabled(svm)) {
+		if (kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT) {
+			svm_set_gpat(svm, kvm_state->hdr.svm.gpat);
+		} else {
+			svm_set_gpat(svm, vcpu->arch.pat);
+			svm->nested.legacy_gpat_semantics = true;
+		}
+	}
 
 	svm_switch_vmcb(svm, &svm->nested.vmcb02);
 	nested_vmcb02_prepare_control(svm, svm->vmcb->save.rip, svm->vmcb->save.cs.base);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 205bf07896ad..d951d25f1f91 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -4245,6 +4245,8 @@ static int svm_vcpu_pre_run(struct kvm_vcpu *vcpu)
 	if (to_kvm_sev_info(vcpu->kvm)->need_init)
 		return -EINVAL;
 
+	to_svm(vcpu)->nested.legacy_gpat_semantics = false;
+
 	return 1;
 }
 
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 88549705133f..0bb9fdcb489d 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -238,6 +238,15 @@ struct svm_nested_state {
 	 * on its side.
 	 */
 	bool force_msr_bitmap_recalc;
+
+	/*
+	 * Indicates that a legacy nested state (without a valid gPAT) was
+	 * recently restored. Until the next KVM_RUN, updates to hPAT are
+	 * also applied to gPAT, preserving legacy behavior (i.e. L2 shares
+	 * L1's PAT). Because checkpoint restore is non-atomic, this
+	 * complication is necessary for backward compatibility.
+	 */
+	bool legacy_gpat_semantics;
 };
 
 struct vcpu_sev_es_state {
@@ -621,6 +630,8 @@ static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
 		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
 			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
 	}
+	if (svm->nested.legacy_gpat_semantics)
+		svm_set_gpat(svm, data);
 }
 
 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v4 8/8] KVM: selftests: nSVM: Add svm_nested_pat test
  2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
                   ` (6 preceding siblings ...)
  2026-02-12 15:58 ` [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state Jim Mattson
@ 2026-02-12 15:58 ` Jim Mattson
  7 siblings, 0 replies; 34+ messages in thread
From: Jim Mattson @ 2026-02-12 15:58 UTC (permalink / raw)
  To: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest, Yosry Ahmed
  Cc: Jim Mattson

Verify that KVM correctly virtualizes the host PAT MSR and the guest PAT
register for nested SVM guests.

With nested NPT disabled:
 * L1 and L2 share the same PAT
 * The vmcb12.g_pat is ignored

With nested NPT enabled:
 * An invalid g_pat in vmcb12 causes VMEXIT_INVALID
 * RDMSR(IA32_PAT) from L2 returns the value of the guest PAT register
 * WRMSR(IA32_PAT) from L2 is reflected in vmcb12's g_pat on VMEXIT
 * RDMSR(IA32_PAT) from L1 returns the value of the host PAT MSR
 * Save/restore with the vCPU in guest mode preserves both hPAT and gPAT

Signed-off-by: Jim Mattson <jmattson@google.com>
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../selftests/kvm/x86/svm_nested_pat_test.c   | 298 ++++++++++++++++++
 2 files changed, 299 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/svm_nested_pat_test.c

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index 7810f9db5f77..5554e40f73f8 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -110,6 +110,7 @@ TEST_GEN_PROGS_x86 += x86/state_test
 TEST_GEN_PROGS_x86 += x86/vmx_preemption_timer_test
 TEST_GEN_PROGS_x86 += x86/svm_vmcall_test
 TEST_GEN_PROGS_x86 += x86/svm_int_ctl_test
+TEST_GEN_PROGS_x86 += x86/svm_nested_pat_test
 TEST_GEN_PROGS_x86 += x86/svm_nested_shutdown_test
 TEST_GEN_PROGS_x86 += x86/svm_nested_soft_inject_test
 TEST_GEN_PROGS_x86 += x86/svm_lbr_nested_state
diff --git a/tools/testing/selftests/kvm/x86/svm_nested_pat_test.c b/tools/testing/selftests/kvm/x86/svm_nested_pat_test.c
new file mode 100644
index 000000000000..08c1428969b0
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/svm_nested_pat_test.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KVM nested SVM PAT test
+ *
+ * Copyright (C) 2026, Google LLC.
+ *
+ * Test that KVM correctly virtualizes the PAT MSR and VMCB g_pat field
+ * for nested SVM guests:
+ *
+ * o With nested NPT disabled:
+ *     - L1 and L2 share the same PAT
+ *     - The vmcb12.g_pat is ignored
+ * o With nested NPT enabled:
+ *     - Invalid g_pat in vmcb12 should cause VMEXIT_INVALID
+ *     - L2 should see vmcb12.g_pat via RDMSR, not L1's PAT
+ *     - L2's writes to PAT should be saved to vmcb12 on exit
+ *     - L1's PAT should be restored after #VMEXIT from L2
+ *     - State save/restore should preserve both L1's and L2's PAT values
+ */
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include "test_util.h"
+#include "kvm_util.h"
+#include "processor.h"
+#include "svm_util.h"
+
+#define L2_GUEST_STACK_SIZE 256
+
+#define PAT_DEFAULT		0x0007040600070406ULL
+#define L1_PAT_VALUE		0x0007040600070404ULL  /* Change PA0 to WT */
+#define L2_VMCB12_PAT		0x0606060606060606ULL  /* All WB */
+#define L2_PAT_MODIFIED		0x0606060606060604ULL  /* Change PA0 to WT */
+#define INVALID_PAT_VALUE	0x0808080808080808ULL  /* 8 is reserved */
+
+/*
+ * Shared state between L1 and L2 for verification.
+ */
+struct pat_test_data {
+	uint64_t l2_pat_read;
+	uint64_t l2_pat_after_write;
+	uint64_t l1_pat_after_vmexit;
+	uint64_t vmcb12_gpat_after_exit;
+	bool l2_done;
+};
+
+static struct pat_test_data *pat_data;
+
+static void l2_guest_code(void)
+{
+	pat_data->l2_pat_read = rdmsr(MSR_IA32_CR_PAT);
+	wrmsr(MSR_IA32_CR_PAT, L2_PAT_MODIFIED);
+	pat_data->l2_pat_after_write = rdmsr(MSR_IA32_CR_PAT);
+	pat_data->l2_done = true;
+	vmmcall();
+}
+
+static void l2_guest_code_saverestoretest(void)
+{
+	pat_data->l2_pat_read = rdmsr(MSR_IA32_CR_PAT);
+
+	GUEST_SYNC(1);
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), pat_data->l2_pat_read);
+
+	wrmsr(MSR_IA32_CR_PAT, L2_PAT_MODIFIED);
+	pat_data->l2_pat_after_write = rdmsr(MSR_IA32_CR_PAT);
+
+	GUEST_SYNC(2);
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), L2_PAT_MODIFIED);
+
+	pat_data->l2_done = true;
+	vmmcall();
+}
+
+static void l2_guest_code_multi_vmentry(void)
+{
+	pat_data->l2_pat_read = rdmsr(MSR_IA32_CR_PAT);
+	wrmsr(MSR_IA32_CR_PAT, L2_PAT_MODIFIED);
+	pat_data->l2_pat_after_write = rdmsr(MSR_IA32_CR_PAT);
+	vmmcall();
+
+	pat_data->l2_pat_read = rdmsr(MSR_IA32_CR_PAT);
+	pat_data->l2_done = true;
+	vmmcall();
+}
+
+static struct vmcb *l1_common_setup(struct svm_test_data *svm,
+				    struct pat_test_data *data,
+				    void *l2_guest_code,
+				    void *l2_guest_stack)
+{
+	struct vmcb *vmcb = svm->vmcb;
+
+	pat_data = data;
+
+	wrmsr(MSR_IA32_CR_PAT, L1_PAT_VALUE);
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), L1_PAT_VALUE);
+
+	generic_svm_setup(svm, l2_guest_code, l2_guest_stack);
+
+	vmcb->save.g_pat = L2_VMCB12_PAT;
+	vmcb->control.intercept &= ~(1ULL << INTERCEPT_MSR_PROT);
+
+	return vmcb;
+}
+
+static void l1_assert_l2_state(struct pat_test_data *data, uint64_t expected_pat_read)
+{
+	GUEST_ASSERT(data->l2_done);
+	GUEST_ASSERT_EQ(data->l2_pat_read, expected_pat_read);
+	GUEST_ASSERT_EQ(data->l2_pat_after_write, L2_PAT_MODIFIED);
+}
+
+static void l1_svm_code_npt_disabled(struct svm_test_data *svm,
+				     struct pat_test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb;
+
+	vmcb = l1_common_setup(svm, data, l2_guest_code,
+			       &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	l1_assert_l2_state(data, L1_PAT_VALUE);
+
+	data->l1_pat_after_vmexit = rdmsr(MSR_IA32_CR_PAT);
+	GUEST_ASSERT_EQ(data->l1_pat_after_vmexit, L2_PAT_MODIFIED);
+
+	GUEST_DONE();
+}
+
+static void l1_svm_code_invalid_gpat(struct svm_test_data *svm,
+				     struct pat_test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb;
+
+	vmcb = l1_common_setup(svm, data, l2_guest_code,
+			       &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	vmcb->save.g_pat = INVALID_PAT_VALUE;
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_ERR);
+	GUEST_ASSERT(!data->l2_done);
+
+	GUEST_DONE();
+}
+
+static void l1_svm_code_npt_enabled(struct svm_test_data *svm,
+				    struct pat_test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb;
+
+	vmcb = l1_common_setup(svm, data, l2_guest_code,
+			       &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	l1_assert_l2_state(data, L2_VMCB12_PAT);
+
+	data->vmcb12_gpat_after_exit = vmcb->save.g_pat;
+	GUEST_ASSERT_EQ(data->vmcb12_gpat_after_exit, L2_PAT_MODIFIED);
+
+	data->l1_pat_after_vmexit = rdmsr(MSR_IA32_CR_PAT);
+	GUEST_ASSERT_EQ(data->l1_pat_after_vmexit, L1_PAT_VALUE);
+
+	GUEST_DONE();
+}
+
+static void l1_svm_code_saverestore(struct svm_test_data *svm,
+				    struct pat_test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb;
+
+	vmcb = l1_common_setup(svm, data, l2_guest_code_saverestoretest,
+			       &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	GUEST_ASSERT(data->l2_done);
+
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), L1_PAT_VALUE);
+	GUEST_ASSERT_EQ(vmcb->save.g_pat, L2_PAT_MODIFIED);
+
+	GUEST_DONE();
+}
+
+static void l1_svm_code_multi_vmentry(struct svm_test_data *svm,
+				      struct pat_test_data *data)
+{
+	unsigned long l2_guest_stack[L2_GUEST_STACK_SIZE];
+	struct vmcb *vmcb;
+
+	vmcb = l1_common_setup(svm, data, l2_guest_code_multi_vmentry,
+			       &l2_guest_stack[L2_GUEST_STACK_SIZE]);
+
+	run_guest(vmcb, svm->vmcb_gpa);
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+
+	GUEST_ASSERT_EQ(data->l2_pat_after_write, L2_PAT_MODIFIED);
+	GUEST_ASSERT_EQ(vmcb->save.g_pat, L2_PAT_MODIFIED);
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), L1_PAT_VALUE);
+
+	vmcb->save.rip += 3;  /* skip the 3-byte vmmcall */
+	run_guest(vmcb, svm->vmcb_gpa);
+
+	GUEST_ASSERT_EQ(vmcb->control.exit_code, SVM_EXIT_VMMCALL);
+	GUEST_ASSERT(data->l2_done);
+	GUEST_ASSERT_EQ(data->l2_pat_read, L2_PAT_MODIFIED);
+	GUEST_ASSERT_EQ(rdmsr(MSR_IA32_CR_PAT), L1_PAT_VALUE);
+
+	GUEST_DONE();
+}
+
+static void run_test(void *l1_code, const char *test_name, bool npt_enabled,
+		     bool do_save_restore)
+{
+	struct pat_test_data *data_hva;
+	vm_vaddr_t svm_gva, data_gva;
+	struct kvm_x86_state *state;
+	struct kvm_vcpu *vcpu;
+	struct kvm_vm *vm;
+	struct ucall uc;
+
+	pr_info("Testing: %s\n", test_name);
+
+	vm = vm_create_with_one_vcpu(&vcpu, l1_code);
+	if (npt_enabled)
+		vm_enable_npt(vm);
+
+	vcpu_alloc_svm(vm, &svm_gva);
+
+	data_gva = vm_vaddr_alloc_page(vm);
+	data_hva = addr_gva2hva(vm, data_gva);
+	memset(data_hva, 0, sizeof(*data_hva));
+
+	if (npt_enabled)
+		tdp_identity_map_default_memslots(vm);
+
+	vcpu_args_set(vcpu, 2, svm_gva, data_gva);
+
+	for (;;) {
+		vcpu_run(vcpu);
+		TEST_ASSERT_KVM_EXIT_REASON(vcpu, KVM_EXIT_IO);
+
+		switch (get_ucall(vcpu, &uc)) {
+		case UCALL_ABORT:
+			REPORT_GUEST_ASSERT(uc);
+			/* NOT REACHED */
+		case UCALL_SYNC:
+			if (do_save_restore) {
+				pr_info("  Save/restore at sync point %ld\n",
+					uc.args[1]);
+				state = vcpu_save_state(vcpu);
+				kvm_vm_release(vm);
+				vcpu = vm_recreate_with_one_vcpu(vm);
+				vcpu_load_state(vcpu, state);
+				kvm_x86_state_cleanup(state);
+			}
+			break;
+		case UCALL_DONE:
+			pr_info("  PASSED\n");
+			kvm_vm_free(vm);
+			return;
+		default:
+			TEST_FAIL("Unknown ucall %lu", uc.cmd);
+		}
+	}
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_SVM));
+	TEST_REQUIRE(kvm_cpu_has(X86_FEATURE_NPT));
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_NESTED_STATE));
+
+	run_test(l1_svm_code_npt_disabled, "nested NPT disabled", false, false);
+
+	run_test(l1_svm_code_invalid_gpat, "invalid g_pat", true, false);
+
+	run_test(l1_svm_code_npt_enabled, "nested NPT enabled", true, false);
+
+	run_test(l1_svm_code_saverestore, "save/restore", true, true);
+
+	run_test(l1_svm_code_multi_vmentry, "multiple entries", true, false);
+
+	return 0;
+}
-- 
2.53.0.239.g8d8fc8a987-goog


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode
  2026-02-12 15:58 ` [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode Jim Mattson
@ 2026-02-13  0:17   ` Yosry Ahmed
  2026-02-13 15:26     ` Sean Christopherson
  0 siblings, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:17 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:49AM -0800, Jim Mattson wrote:
> When running an L2 guest and writing to MSR_IA32_CR_PAT, the host PAT value
> is stored in both vmcb01's g_pat field and vmcb02's g_pat field, but the
> clean bit was only being cleared for vmcb02.
> 
> Introduce the helper vmcb_set_gpat() which sets vmcb->save.g_pat and marks
> the VMCB dirty for VMCB_NPT. Use this helper in both svm_set_msr() for
> updating vmcb01 and in nested_vmcb02_compute_g_pat() for updating vmcb02,
> ensuring both VMCBs' NPT fields are properly marked dirty.
> 
> Fixes: 4995a3685f1b ("KVM: SVM: Use a separate vmcb for the nested L2 guest")
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/svm/nested.c | 2 +-
>  arch/x86/kvm/svm/svm.c    | 3 +--
>  arch/x86/kvm/svm/svm.h    | 9 +++++----
>  3 files changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index d80b1bde6630..b72a1f3c4144 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -707,7 +707,7 @@ void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
>  		return;
>  
>  	/* FIXME: merge g_pat from vmcb01 and vmcb12.  */
> -	svm->nested.vmcb02.ptr->save.g_pat = svm->vmcb01.ptr->save.g_pat;
> +	vmcb_set_gpat(svm->nested.vmcb02.ptr, svm->vmcb01.ptr->save.g_pat);
>  }
>  
>  static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 364915f42e13..529cbac57814 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2924,10 +2924,9 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  		if (ret)
>  			break;
>  
> -		svm->vmcb01.ptr->save.g_pat = data;
> +		vmcb_set_gpat(svm->vmcb01.ptr, data);
>  		if (is_guest_mode(vcpu))
>  			nested_vmcb02_compute_g_pat(svm);
> -		vmcb_mark_dirty(svm->vmcb, VMCB_NPT);
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr->host_initiated &&
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 0bb93879abfe..9850ed01e16e 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -434,14 +434,15 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
>  	vmcb->control.clean &= ~(1 << bit);
>  }
>  
> -static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)

Huh, I assume the removal of vmcb_is_dirty() was not intentional?

> +static inline bool vmcb12_is_dirty(struct vmcb_ctrl_area_cached *control, int bit)
>  {
> -        return !test_bit(bit, (unsigned long *)&vmcb->control.clean);
> +	return !test_bit(bit, (unsigned long *)&control->clean);
>  }
>  
> -static inline bool vmcb12_is_dirty(struct vmcb_ctrl_area_cached *control, int bit)
> +static inline void vmcb_set_gpat(struct vmcb *vmcb, u64 data)
>  {
> -	return !test_bit(bit, (unsigned long *)&control->clean);
> +	vmcb->save.g_pat = data;
> +	vmcb_mark_dirty(vmcb, VMCB_NPT);
>  }
>  
>  static __always_inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat
  2026-02-12 15:58 ` [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat Jim Mattson
@ 2026-02-13  0:22   ` Yosry Ahmed
  2026-02-20 22:26     ` Jim Mattson
  0 siblings, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:22 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:50AM -0800, Jim Mattson wrote:
> Cache g_pat from vmcb12 in vmcb_save_area_cached to avoid TOCTTOU issues,
> and add a validity check so that when nested paging is enabled for vmcb12,
> an invalid g_pat at emulated VMRUN causes an immediate VMEXIT with exit
> code VMEXIT_INVALID, as specified in the APM, volume 2: "Nested Paging and
> VMRUN/VMEXIT."
> 
> Fixes: 3d6368ef580a ("KVM: SVM: Add VMRUN handler")
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/svm/nested.c | 17 +++++++++++++----
>  arch/x86/kvm/svm/svm.h    |  1 +
>  2 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index b72a1f3c4144..91b35adb83f8 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -426,7 +426,8 @@ static bool nested_vmcb_check_controls(struct kvm_vcpu *vcpu,
>  
>  /* Common checks that apply to both L1 and L2 state.  */
>  static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
> -				   struct vmcb_save_area_cached *save)
> +				   struct vmcb_save_area_cached *save,
> +				   bool check_gpat)
>  {
>  	if (CC(!(save->efer & EFER_SVME)))
>  		return false;
> @@ -462,6 +463,9 @@ static bool nested_vmcb_check_save(struct kvm_vcpu *vcpu,
>  	if (CC(!kvm_valid_efer(vcpu, save->efer)))
>  		return false;
>  
> +	if (check_gpat && CC(!kvm_pat_valid(save->g_pat)))
> +		return false;
> +
>  	return true;
>  }
>  
> @@ -573,6 +577,7 @@ static void __nested_copy_vmcb_save_to_cache(struct vmcb_save_area_cached *to,
>  
>  	to->rax = from->rax;
>  	to->cr2 = from->cr2;
> +	to->g_pat = from->g_pat;
>  
>  	svm_copy_lbrs(to, from);
>  }
> @@ -1036,7 +1041,8 @@ int enter_svm_guest_mode(struct kvm_vcpu *vcpu, u64 vmcb12_gpa, bool from_vmrun)
>  
>  	enter_guest_mode(vcpu);
>  
> -	if (!nested_vmcb_check_save(vcpu, &svm->nested.save) ||
> +	if (!nested_vmcb_check_save(vcpu, &svm->nested.save,
> +				    nested_npt_enabled(svm)) ||
>  	    !nested_vmcb_check_controls(vcpu, &svm->nested.ctl,
>  					svm->vmcb01.ptr->save.cr0))
>  		return -EINVAL;
> @@ -2006,13 +2012,16 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  
>  	/*
>  	 * Validate host state saved from before VMRUN (see
> -	 * nested_svm_check_permissions).
> +	 * nested_svm_check_permissions). Note that the g_pat field is not
> +	 * validated, because (a) it may have been clobbered by SMM before
> +	 * KVM_GET_NESTED_STATE, and (b) it is not loaded at emulated
> +	 * #VMEXIT.

(b) here means that svm_copy_vmrun_state() does not copy it to vmcb01,
and the value is restored by KVM_SET_MSRS, right?

If my understanding is correct:

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

>  	 */
>  	__nested_copy_vmcb_save_to_cache(&save_cached, save);
>  	if (!(save->cr0 & X86_CR0_PG) ||
>  	    !(save->cr0 & X86_CR0_PE) ||
>  	    (save->rflags & X86_EFLAGS_VM) ||
> -	    !nested_vmcb_check_save(vcpu, &save_cached))
> +	    !nested_vmcb_check_save(vcpu, &save_cached, false))
>  		goto out_free;
>  
>  
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 9850ed01e16e..a49c48459e0b 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -161,6 +161,7 @@ struct vmcb_save_area_cached {
>  	u64 isst_addr;
>  	u64 rax;
>  	u64 cr2;
> +	u64 g_pat;
>  	u64 dbgctl;
>  	u64 br_from;
>  	u64 br_to;
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT
  2026-02-12 15:58 ` [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT Jim Mattson
@ 2026-02-13  0:27   ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:27 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:51AM -0800, Jim Mattson wrote:
> When nested NPT is enabled in vmcb12, copy the (cached and validated)
> vmcb12 g_pat field to the guest PAT register. Under KVM, the guest PAT
> register lives in svm->nested.save.g_pat.
> 
> When NPT is enabled, but nested NPT is disabled, copy L1's IA32_PAT MSR to
> the vmcb02 g_pat field, since L2 shares the IA32_PAT MSR with L1.
> 
> When NPT is disabled, the g_pat field is ignored by hardware.
> 
> Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
> Signed-off-by: Jim Mattson <jmattson@google.com>

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

> ---
>  arch/x86/kvm/svm/nested.c | 10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 91b35adb83f8..dc8275837120 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -724,9 +724,6 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
>  	struct vmcb *vmcb02 = svm->nested.vmcb02.ptr;
>  	struct kvm_vcpu *vcpu = &svm->vcpu;
>  
> -	nested_vmcb02_compute_g_pat(svm);
> -	vmcb_mark_dirty(vmcb02, VMCB_NPT);
> -
>  	/* Load the nested guest state */
>  	if (svm->nested.vmcb12_gpa != svm->nested.last_vmcb12_gpa) {
>  		new_vmcb12 = true;
> @@ -757,6 +754,13 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
>  		vmcb_mark_dirty(vmcb02, VMCB_CET);
>  	}
>  
> +	if (nested_npt_enabled(svm)) {
> +		if (unlikely(new_vmcb12 || vmcb12_is_dirty(control, VMCB_NPT)))
> +			vmcb_set_gpat(vmcb02, svm->nested.save.g_pat);
> +	} else if (npt_enabled) {
> +		vmcb_set_gpat(vmcb02, vcpu->arch.pat);
> +	}
> +
>  	kvm_set_rflags(vcpu, save->rflags | X86_EFLAGS_FIXED);
>  
>  	svm_set_efer(vcpu, svm->nested.save.efer);
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-12 15:58 ` [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT Jim Mattson
@ 2026-02-13  0:30   ` Yosry Ahmed
  2026-02-13 15:20     ` Sean Christopherson
  0 siblings, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:30 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:52AM -0800, Jim Mattson wrote:
> When the vCPU is in guest mode with nested NPT enabled, guest accesses to
> IA32_PAT are redirected to the gPAT register, which is stored in
> svm->nested.save.g_pat.
> 
> Non-guest accesses (e.g. from userspace) to IA32_PAT are always redirected
> to hPAT, which is stored in vcpu->arch.pat.
> 
> This is architected behavior. It also makes it possible to restore a new
> checkpoint on an old kernel with reasonable semantics. After the restore,
> gPAT will be lost, and L2 will run on L1's PAT. Note that the old kernel
> would have always run L2 on L1's PAT.
> 
> Add WARN_ON_ONCE to both svm_get_msr() and svm_set_msr() to flag any
> host-initiated accesses originating from KVM itself rather than userspace.
> 
> Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
> Signed-off-by: Jim Mattson <jmattson@google.com>
> ---
>  arch/x86/kvm/svm/nested.c |  9 ---------
>  arch/x86/kvm/svm/svm.c    | 37 ++++++++++++++++++++++++++++++-------
>  arch/x86/kvm/svm/svm.h    | 17 ++++++++++++++++-
>  3 files changed, 46 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index dc8275837120..69b577a4915c 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -706,15 +706,6 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
>  	return 0;
>  }
>  
> -void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm)
> -{
> -	if (!svm->nested.vmcb02.ptr)
> -		return;
> -
> -	/* FIXME: merge g_pat from vmcb01 and vmcb12.  */
> -	vmcb_set_gpat(svm->nested.vmcb02.ptr, svm->vmcb01.ptr->save.g_pat);
> -}
> -
>  static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
>  {
>  	struct vmcb_ctrl_area_cached *control = &svm->nested.ctl;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 529cbac57814..205bf07896ad 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -2837,6 +2837,21 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  	case MSR_AMD64_DE_CFG:
>  		msr_info->data = svm->msr_decfg;
>  		break;
> +	case MSR_IA32_CR_PAT:
> +		/*
> +		 * When nested NPT is enabled, L2 has a separate PAT from
> +		 * L1.  Guest accesses to IA32_PAT while running L2 target
> +		 * L2's gPAT; host-initiated accesses always target L1's
> +		 * hPAT for backward and forward KVM_GET_MSRS compatibility
> +		 * with older kernels.
> +		 */
> +		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
> +		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
> +		    nested_npt_enabled(svm))
> +			msr_info->data = svm->nested.save.g_pat;
> +		else
> +			msr_info->data = vcpu->arch.pat;
> +		break;
>  	default:
>  		return kvm_get_msr_common(vcpu, msr_info);
>  	}
> @@ -2920,13 +2935,21 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  
>  		break;
>  	case MSR_IA32_CR_PAT:
> -		ret = kvm_set_msr_common(vcpu, msr);
> -		if (ret)
> -			break;
> -
> -		vmcb_set_gpat(svm->vmcb01.ptr, data);
> -		if (is_guest_mode(vcpu))
> -			nested_vmcb02_compute_g_pat(svm);
> +		if (!kvm_pat_valid(data))
> +			return 1;
> +		/*
> +		 * When nested NPT is enabled, L2 has a separate PAT from
> +		 * L1.  Guest accesses to IA32_PAT while running L2 target
> +		 * L2's gPAT; host-initiated accesses always target L1's
> +		 * hPAT for backward and forward KVM_SET_MSRS compatibility
> +		 * with older kernels.
> +		 */
> +		WARN_ON_ONCE(msr->host_initiated && vcpu->wants_to_run);
> +		if (!msr->host_initiated && is_guest_mode(vcpu) &&
> +		    nested_npt_enabled(svm))
> +			svm_set_gpat(svm, data);
> +		else
> +			svm_set_hpat(svm, data);
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr->host_initiated &&
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index a49c48459e0b..88549705133f 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -607,6 +607,22 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
>  	return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
>  }
>  
> +static inline void svm_set_gpat(struct vcpu_svm *svm, u64 data)
> +{
> +	svm->nested.save.g_pat = data;
> +	vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> +}
> +
> +static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
> +{
> +	svm->vcpu.arch.pat = data;
> +	if (npt_enabled) {
> +		vmcb_set_gpat(svm->vmcb01.ptr, data);
> +		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> +			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> +	}
> +}

Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
not L1's, and svm_set_hpat() calls vmcb_set_gpat()?

"gpat" means different things in the context of the VMCB or otherwise,
which kinda makes sense but is also not super clear. Maybe
svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?

Will leave it up to Sean to decide if the naming is good enough, but
honestly I don't want to stall this series, so hopefully any renames can
be done as a follow up or when the series is applied.

> +
>  static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
>  {
>  	return guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_VNMI) &&
> @@ -840,7 +856,6 @@ void nested_copy_vmcb_control_to_cache(struct vcpu_svm *svm,
>  void nested_copy_vmcb_save_to_cache(struct vcpu_svm *svm,
>  				    struct vmcb_save_area *save);
>  void nested_sync_control_from_vmcb02(struct vcpu_svm *svm);
> -void nested_vmcb02_compute_g_pat(struct vcpu_svm *svm);
>  void svm_switch_vmcb(struct vcpu_svm *svm, struct kvm_vmcb_info *target_vmcb);
>  
>  extern struct kvm_x86_nested_ops svm_nested_ops;
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT
  2026-02-12 15:58 ` [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT Jim Mattson
@ 2026-02-13  0:33   ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:33 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:53AM -0800, Jim Mattson wrote:
> According to the APM volume 3 pseudo-code for "VMRUN," when nested paging
> is enabled in the vmcb, the guest PAT register (gPAT) is saved to the vmcb
> on emulated VMEXIT.
> 
> When nested NPT is enabled, save the vmcb02 g_pat field to the vmcb12 g_pat
> field on emulated VMEXIT.
> 
> Fixes: 15038e147247 ("KVM: SVM: obey guest PAT")
> Signed-off-by: Jim Mattson <jmattson@google.com>

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

> ---
>  arch/x86/kvm/svm/nested.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 69b577a4915c..26f758e294ab 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1312,6 +1312,9 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
>  	vmcb12->save.dr6    = svm->vcpu.arch.dr6;
>  	vmcb12->save.cpl    = vmcb02->save.cpl;
>  
> +	if (nested_npt_enabled(svm))
> +		vmcb12->save.g_pat = vmcb02->save.g_pat;
> +
>  	if (guest_cpu_cap_has(vcpu, X86_FEATURE_SHSTK)) {
>  		vmcb12->save.s_cet	= vmcb02->save.s_cet;
>  		vmcb12->save.isst_addr	= vmcb02->save.isst_addr;
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE
  2026-02-12 15:58 ` [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE Jim Mattson
@ 2026-02-13  0:36   ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:36 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:54AM -0800, Jim Mattson wrote:
> Add a 'flags' field to the SVM nested state header, and use bit 0 of the
> flags to indicate that gPAT is stored in the nested state.
> 
> If in guest mode with NPT enabled, store the current vmcb->save.g_pat value
> into the header of the nested state, and set the flag.
> 
> Note that struct kvm_svm_nested_state_hdr is included in a union padded to
> 120 bytes, so there is room to add the flags field and the gpat field
> without changing any offsets.
> 
> Fixes: cc440cdad5b7 ("KVM: nSVM: implement KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE")
> Signed-off-by: Jim Mattson <jmattson@google.com>

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

> ---
>  arch/x86/include/uapi/asm/kvm.h |  5 +++++
>  arch/x86/kvm/svm/nested.c       | 16 ++++++++++++++++
>  2 files changed, 21 insertions(+)
> 
> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
> index 846a63215ce1..664d04d1db3f 100644
> --- a/arch/x86/include/uapi/asm/kvm.h
> +++ b/arch/x86/include/uapi/asm/kvm.h
> @@ -495,6 +495,8 @@ struct kvm_sync_regs {
>  
>  #define KVM_STATE_VMX_PREEMPTION_TIMER_DEADLINE	0x00000001
>  
> +#define KVM_STATE_SVM_VALID_GPAT	0x00000001
> +
>  /* vendor-independent attributes for system fd (group 0) */
>  #define KVM_X86_GRP_SYSTEM		0
>  #  define KVM_X86_XCOMP_GUEST_SUPP	0
> @@ -531,6 +533,9 @@ struct kvm_svm_nested_state_data {
>  
>  struct kvm_svm_nested_state_hdr {
>  	__u64 vmcb_pa;
> +	__u32 flags;
> +	__u32 reserved;
> +	__u64 gpat;
>  };
>  
>  /* for KVM_CAP_NESTED_STATE */
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index 26f758e294ab..f73f3e586012 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -1893,6 +1893,10 @@ static int svm_get_nested_state(struct kvm_vcpu *vcpu,
>  	/* First fill in the header and copy it out.  */
>  	if (is_guest_mode(vcpu)) {
>  		kvm_state.hdr.svm.vmcb_pa = svm->nested.vmcb12_gpa;
> +		if (nested_npt_enabled(svm)) {
> +			kvm_state.hdr.svm.flags |= KVM_STATE_SVM_VALID_GPAT;
> +			kvm_state.hdr.svm.gpat = svm->nested.save.g_pat;
> +		}
>  		kvm_state.size += KVM_STATE_NESTED_SVM_VMCB_SIZE;
>  		kvm_state.flags |= KVM_STATE_NESTED_GUEST_MODE;
>  
> @@ -2022,6 +2026,14 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  	    !nested_vmcb_check_save(vcpu, &save_cached, false))
>  		goto out_free;
>  
> +	/*
> +	 * Validate gPAT, if provided. This is done separately from the
> +	 * vmcb_save_area_cached validation above, because gPAT is L2
> +	 * state, but the vmcb_save_area_cached is populated with L1 state.
> +	 */
> +	if ((kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT) &&
> +	    !kvm_pat_valid(kvm_state->hdr.svm.gpat))
> +		goto out_free;
>  
>  	/*
>  	 * All checks done, we can enter guest mode. Userspace provides
> @@ -2061,6 +2073,10 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
>  	if (ret)
>  		goto out_free;
>  
> +	if (nested_npt_enabled(svm) &&
> +	    (kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT))
> +		svm_set_gpat(svm, kvm_state->hdr.svm.gpat);
> +
>  	svm_switch_vmcb(svm, &svm->nested.vmcb02);
>  	nested_vmcb02_prepare_control(svm, svm->vmcb->save.rip, svm->vmcb->save.cs.base);
>  
> -- 
> 2.53.0.239.g8d8fc8a987-goog
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state
  2026-02-12 15:58 ` [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state Jim Mattson
@ 2026-02-13  0:38   ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13  0:38 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 07:58:55AM -0800, Jim Mattson wrote:
> When nested NPT is enabled and KVM_SET_NESTED_STATE is used to restore an
> old checkpoint (without a valid gPAT), the current IA32_PAT value must be
> used as L2's gPAT.
> 
> Unfortunately, checkpoint restore is non-atomic, and the order in which
> state components are restored is not specified. Hence, the current IA32_PAT
> value may be restored by KVM_SET_MSRS after KVM_SET_NESTED_STATE.  To
> further complicate matters, there may be a KVM_GET_NESTED_STATE before the
> next KVM_RUN.
> 
> Introduce a new boolean, svm->nested.legacy_gpat_semantics. When set, hPAT
> updates are also applied to gPAT, preserving the old behavior (i.e. L2
> shares L1's PAT). Set this boolean when restoring legacy state (i.e. nested
> NPT is enabled, but no gPAT is provided) in KVM_SET_NESTED_STATE. Clear
> this boolean in svm_vcpu_pre_run(), to ensure that hPAT and gPAT are
> decoupled before the vCPU resumes execution.
> 
> Signed-off-by: Jim Mattson <jmattson@google.com>

Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13  0:30   ` Yosry Ahmed
@ 2026-02-13 15:20     ` Sean Christopherson
  2026-02-13 15:42       ` Jim Mattson
  2026-02-13 15:43       ` Yosry Ahmed
  0 siblings, 2 replies; 34+ messages in thread
From: Sean Christopherson @ 2026-02-13 15:20 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

Please trim your replies.  Scrolling through 100+ lines of quoted text to find
the ~12 lines of context that actually matter is annoying.

On Fri, Feb 13, 2026, Yosry Ahmed wrote:
> On Thu, Feb 12, 2026 at 07:58:52AM -0800, Jim Mattson wrote:
> > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > index a49c48459e0b..88549705133f 100644
> > --- a/arch/x86/kvm/svm/svm.h
> > +++ b/arch/x86/kvm/svm/svm.h
> > @@ -607,6 +607,22 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
> >  	return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
> >  }
> >  
> > +static inline void svm_set_gpat(struct vcpu_svm *svm, u64 data)
> > +{
> > +	svm->nested.save.g_pat = data;
> > +	vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > +}
> > +
> > +static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
> > +{
> > +	svm->vcpu.arch.pat = data;
> > +	if (npt_enabled) {

Peeking at the future patches, if we make this:

	if (!npt_enabled)
		return;

then we can end up with this:

	if (!npt_enabled)
		return;

	vmcb_set_gpat(svm->vmcb01.ptr, data);
	if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
		vmcb_set_gpat(svm->nested.vmcb02.ptr, data);

	if (svm->nested.legacy_gpat_semantics)
		svm_set_l2_pat(svm, data);

Because legacy_gpat_semantics can only be true if npt_enabled is true.  Without
that guard, KVM _looks_ buggy because it's setting gpat in the VMCB even when
it shouldn't exist.

Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
shouldn't we end up with this?

  static inline void svm_set_l1_pat(struct vcpu_svm *svm, u64 data)
  {
	svm->vcpu.arch.pat = data;

	if (!npt_enabled)
		return;

	vmcb_set_gpat(svm->vmcb01.ptr, data);

	if (is_guest_mode(&svm->vcpu)) {
		if (svm->nested.legacy_gpat_semantics)
			svm_set_l2_pat(svm, data);
		else if (!nested_npt_enabled(svm))
			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
	}
  }


> > +		vmcb_set_gpat(svm->vmcb01.ptr, data);
> > +		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > +			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > +	}
> > +}
> 
> Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
> not L1's, and svm_set_hpat() calls vmcb_set_gpat()?

It's not just you.  I don't find it confusing per se, more that it's really
subtle.

> "gpat" means different things in the context of the VMCB or otherwise,
> which kinda makes sense but is also not super clear. Maybe
> svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?

I think just svm_set_l1_pat() and svm_set_l2_pat(), because gpat straight up
doesn't exist when NPT is disabled/unsupported.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode
  2026-02-13  0:17   ` Yosry Ahmed
@ 2026-02-13 15:26     ` Sean Christopherson
  2026-02-13 15:32       ` Yosry Ahmed
  0 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2026-02-13 15:26 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026, Yosry Ahmed wrote:
> On Thu, Feb 12, 2026 at 07:58:49AM -0800, Jim Mattson wrote:
> > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > index 0bb93879abfe..9850ed01e16e 100644
> > --- a/arch/x86/kvm/svm/svm.h
> > +++ b/arch/x86/kvm/svm/svm.h
> > @@ -434,14 +434,15 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
> >  	vmcb->control.clean &= ~(1 << bit);
> >  }
> >  
> > -static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
> 
> Huh, I assume the removal of vmcb_is_dirty() was not intentional?

Regardless of whether or not it was intentional, IMO it's a good change.  KVM
should never check vmcb12 directly, and I can't think of a legitimate case where
KVM should condition its behavior on vmcb0{1,2} being clean/dirty.

Unless a v5 is needed, I'll split it to a separate patch when applying.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode
  2026-02-13 15:26     ` Sean Christopherson
@ 2026-02-13 15:32       ` Yosry Ahmed
  2026-02-13 15:46         ` Jim Mattson
  0 siblings, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13 15:32 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 07:26:34AM -0800, Sean Christopherson wrote:
> On Fri, Feb 13, 2026, Yosry Ahmed wrote:
> > On Thu, Feb 12, 2026 at 07:58:49AM -0800, Jim Mattson wrote:
> > > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > > index 0bb93879abfe..9850ed01e16e 100644
> > > --- a/arch/x86/kvm/svm/svm.h
> > > +++ b/arch/x86/kvm/svm/svm.h
> > > @@ -434,14 +434,15 @@ static inline void vmcb_mark_dirty(struct vmcb *vmcb, int bit)
> > >  	vmcb->control.clean &= ~(1 << bit);
> > >  }
> > >  
> > > -static inline bool vmcb_is_dirty(struct vmcb *vmcb, int bit)
> > 
> > Huh, I assume the removal of vmcb_is_dirty() was not intentional?
> 
> Regardless of whether or not it was intentional, IMO it's a good change.  KVM
> should never check vmcb12 directly, and I can't think of a legitimate case where
> KVM should condition its behavior on vmcb0{1,2} being clean/dirty.

Funny enough, I removed all usages of vmcb_is_dirty() in my series; I
just didn't drop it:

https://lore.kernel.org/kvm/20260206190851.860662-24-yosry.ahmed@linux.dev/

So Jim was cleaning up after me :)

> 
> Unless a v5 is needed, I'll split it to a separate patch when applying.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 15:20     ` Sean Christopherson
@ 2026-02-13 15:42       ` Jim Mattson
  2026-02-13 22:19         ` Sean Christopherson
  2026-02-13 15:43       ` Yosry Ahmed
  1 sibling, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-13 15:42 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yosry Ahmed, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 7:20 AM Sean Christopherson <seanjc@google.com> wrote:
>
> Please trim your replies.  Scrolling through 100+ lines of quoted text to find
> the ~12 lines of context that actually matter is annoying.
>
> On Fri, Feb 13, 2026, Yosry Ahmed wrote:
> > On Thu, Feb 12, 2026 at 07:58:52AM -0800, Jim Mattson wrote:
> > > diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> > > index a49c48459e0b..88549705133f 100644
> > > --- a/arch/x86/kvm/svm/svm.h
> > > +++ b/arch/x86/kvm/svm/svm.h
> > > @@ -607,6 +607,22 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
> > >     return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
> > >  }
> > >
> > > +static inline void svm_set_gpat(struct vcpu_svm *svm, u64 data)
> > > +{
> > > +   svm->nested.save.g_pat = data;
> > > +   vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > > +}
> > > +
> > > +static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
> > > +{
> > > +   svm->vcpu.arch.pat = data;
> > > +   if (npt_enabled) {
>
> Peeking at the future patches, if we make this:
>
>         if (!npt_enabled)
>                 return;
>
> then we can end up with this:
>
>         if (!npt_enabled)
>                 return;
>
>         vmcb_set_gpat(svm->vmcb01.ptr, data);
>         if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
>                 vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
>
>         if (svm->nested.legacy_gpat_semantics)
>                 svm_set_l2_pat(svm, data);
>
> Because legacy_gpat_semantics can only be true if npt_enabled is true.  Without
> that guard, KVM _looks_ buggy because it's setting gpat in the VMCB even when
> it shouldn't exist.
>
> Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
> shouldn't we end up with this?

Sigh. legacy_gpat_semantics is supposed to be set only when
is_guest_mode() and nested_npt_enabled(). I forgot about back-to-back
invocations of KVM_SET_NESTED_STATE. Are there other ways of leaving
guest mode or disabling nested NPT before the next KVM_RUN?

>   static inline void svm_set_l1_pat(struct vcpu_svm *svm, u64 data)
>   {
>         svm->vcpu.arch.pat = data;
>
>         if (!npt_enabled)
>                 return;
>
>         vmcb_set_gpat(svm->vmcb01.ptr, data);
>
>         if (is_guest_mode(&svm->vcpu)) {
>                 if (svm->nested.legacy_gpat_semantics)
>                         svm_set_l2_pat(svm, data);
>                 else if (!nested_npt_enabled(svm))
>                         vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
>         }
>   }
>
>
> > > +           vmcb_set_gpat(svm->vmcb01.ptr, data);
> > > +           if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > > +                   vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > > +   }
> > > +}
> >
> > Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
> > not L1's, and svm_set_hpat() calls vmcb_set_gpat()?
>
> It's not just you.  I don't find it confusing per se, more that it's really
> subtle.
>
> > "gpat" means different things in the context of the VMCB or otherwise,
> > which kinda makes sense but is also not super clear. Maybe
> > svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?
>
> I think just svm_set_l1_pat() and svm_set_l2_pat(), because gpat straight up
> doesn't exist when NPT is disabled/unsupported.

My intention was that "gpat" and "hpat" were from the perspective of the vCPU.

I dislike svm_set_l1_pat() and svm_set_l2_pat(). As you point out
above, there is no independent L2 PAT when nested NPT is disabled. I
think that's less obvious than the fact that there is no gPAT from the
vCPU's perspective. My preference is to follow the APM terminology
when possible. Making up our own terms just leads to confusion.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 15:20     ` Sean Christopherson
  2026-02-13 15:42       ` Jim Mattson
@ 2026-02-13 15:43       ` Yosry Ahmed
  2026-02-13 15:44         ` Yosry Ahmed
  1 sibling, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13 15:43 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 07:20:28AM -0800, Sean Christopherson wrote:
> Please trim your replies.  Scrolling through 100+ lines of quoted text to find
> the ~12 lines of context that actually matter is annoying.

Ack.
 
> Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
> shouldn't we end up with this?

legacy_gpat_semantics should only be set in guest mode, and it's cleared
in pre-run, so before we can exit guest mode IIUC. But having the guard
in place for both cases is probably simpler anyway.

> 
>   static inline void svm_set_l1_pat(struct vcpu_svm *svm, u64 data)
>   {
> 	svm->vcpu.arch.pat = data;
> 
> 	if (!npt_enabled)
> 		return;
> 
> 	vmcb_set_gpat(svm->vmcb01.ptr, data);
> 
> 	if (is_guest_mode(&svm->vcpu)) {
> 		if (svm->nested.legacy_gpat_semantics)
> 			svm_set_l2_pat(svm, data);
> 		else if (!nested_npt_enabled(svm))
> 			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> 	}
>   }
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 15:43       ` Yosry Ahmed
@ 2026-02-13 15:44         ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-13 15:44 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 03:43:12PM +0000, Yosry Ahmed wrote:
> On Fri, Feb 13, 2026 at 07:20:28AM -0800, Sean Christopherson wrote:
> > Please trim your replies.  Scrolling through 100+ lines of quoted text to find
> > the ~12 lines of context that actually matter is annoying.
> 
> Ack.
>  
> > Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
> > shouldn't we end up with this?
> 
> legacy_gpat_semantics should only be set in guest mode, and it's cleared
> in pre-run, so before we can exit guest mode IIUC.

Never mind that, Jim just mentioned the case of back-to-back
KVM_SET_NESTED_STATE calls.


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode
  2026-02-13 15:32       ` Yosry Ahmed
@ 2026-02-13 15:46         ` Jim Mattson
  0 siblings, 0 replies; 34+ messages in thread
From: Jim Mattson @ 2026-02-13 15:46 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 7:33 AM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:
> Funny enough, I removed all usages of vmcb_is_dirty() in my series, I
> just didn't drop it:
>
> https://lore.kernel.org/kvm/20260206190851.860662-24-yosry.ahmed@linux.dev/
>
> So Jim was cleaning up after me :)

Sorry; I removed it when handling the merge conflicts, thought about
moving it to a separate patch, and then forgot about it.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 15:42       ` Jim Mattson
@ 2026-02-13 22:19         ` Sean Christopherson
  2026-02-13 23:31           ` Jim Mattson
  0 siblings, 1 reply; 34+ messages in thread
From: Sean Christopherson @ 2026-02-13 22:19 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Yosry Ahmed, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026, Jim Mattson wrote:
> On Fri, Feb 13, 2026 at 7:20 AM Sean Christopherson <seanjc@google.com> wrote:
> > > > +static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
> > > > +{
> > > > +   svm->vcpu.arch.pat = data;
> > > > +   if (npt_enabled) {
> >
> > Peeking at the future patches, if we make this:
> >
> >         if (!npt_enabled)
> >                 return;
> >
> > then we can end up with this:
> >
> >         if (!npt_enabled)
> >                 return;
> >
> >         vmcb_set_gpat(svm->vmcb01.ptr, data);
> >         if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> >                 vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> >
> >         if (svm->nested.legacy_gpat_semantics)
> >                 svm_set_l2_pat(svm, data);
> >
> > Because legacy_gpat_semantics can only be true if npt_enabled is true.  Without
> > that guard, KVM _looks_ buggy because it's setting gpat in the VMCB even when
> > it shouldn't exist.
> >
> > Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
> > shouldn't we end up with this?
> 
> Sigh. legacy_gpat_semantics is supposed to be set only when
> is_guest_mode() and nested_npt_enabled(). I forgot about back-to-back
> invocations of KVM_SET_NESTED_STATE. Are there other ways of leaving
> guest mode or disabling nested NPT before the next KVM_RUN?

KVM_SET_VCPU_EVENTS will do it if userspace forces a change in SMM state:

		if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
			kvm_leave_nested(vcpu);
			kvm_smm_changed(vcpu, events->smi.smm);
		}

I honestly wasn't even thinking of anything in particular, it just looked weird.

> > > > +           vmcb_set_gpat(svm->vmcb01.ptr, data);
> > > > +           if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > > > +                   vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > > > +   }
> > > > +}
> > >
> > > Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
> > > not L1's, and svm_set_hpat() calls vmcb_set_gpat()?
> >
> > It's not just you.  I don't find it confusing per se, more that it's really
> > subtle.
> >
> > > "gpat" means different things in the context of the VMCB or otherwise,
> > > which kinda makes sense but is also not super clear. Maybe
> > > svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?
> >
> > I think just svm_set_l1_pat() and svm_set_l2_pat(), because gpat straight up
> > doesn't exist when NPT is disabled/unsupported.
> 
> My intention was that "gpat" and "hpat" were from the perspective of the vCPU.
> 
> I dislike svm_set_l1_pat() and svm_set_l2_pat(). As you point out
> above, there is no independent L2 PAT when nested NPT is disabled. I
> think that's less obvious than the fact that there is no gPAT from the
> vCPU's perspective. My preference is to follow the APM terminology
> when possible. Making up our own terms just leads to confusion.

How about svm_set_pat() and svm_set_gpat()?  Because hPAT doesn't exist when NPT
is unsupported/disabled, but KVM still needs to set the vCPU's emulated PAT value.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 22:19         ` Sean Christopherson
@ 2026-02-13 23:31           ` Jim Mattson
  2026-02-17 23:27             ` Sean Christopherson
  0 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-13 23:31 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yosry Ahmed, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026 at 2:19 PM Sean Christopherson <seanjc@google.com> wrote:
>
> On Fri, Feb 13, 2026, Jim Mattson wrote:
> > On Fri, Feb 13, 2026 at 7:20 AM Sean Christopherson <seanjc@google.com> wrote:
> > > > > +static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
> > > > > +{
> > > > > +   svm->vcpu.arch.pat = data;
> > > > > +   if (npt_enabled) {
> > >
> > > Peeking at the future patches, if we make this:
> > >
> > >         if (!npt_enabled)
> > >                 return;
> > >
> > > then we can end up with this:
> > >
> > >         if (!npt_enabled)
> > >                 return;
> > >
> > >         vmcb_set_gpat(svm->vmcb01.ptr, data);
> > >         if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > >                 vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > >
> > >         if (svm->nested.legacy_gpat_semantics)
> > >                 svm_set_l2_pat(svm, data);
> > >
> > > Because legacy_gpat_semantics can only be true if npt_enabled is true.  Without
> > > that guard, KVM _looks_ buggy because it's setting gpat in the VMCB even when
> > > it shouldn't exist.
> > >
> > > Actually, calling svm_set_l2_pat() when !is_guest_mode() is wrong too, no?  E.g.
> > > shouldn't we end up with this?
> >
> > Sigh. legacy_gpat_semantics is supposed to be set only when
> > is_guest_mode() and nested_npt_enabled(). I forgot about back-to-back
> > invocations of KVM_SET_NESTED_STATE. Are there other ways of leaving
> > guest mode or disabling nested NPT before the next KVM_RUN?
>
> KVM_SET_VCPU_EVENTS will do it if userspace forces a change in SMM state:
>
>                 if (!!(vcpu->arch.hflags & HF_SMM_MASK) != events->smi.smm) {
>                         kvm_leave_nested(vcpu);
>                         kvm_smm_changed(vcpu, events->smi.smm);
>                 }
>
> I honestly wasn't even thinking of anything in particular, it just looked weird.

At the very least, then, kvm_leave_nested() has to clear
legacy_gpat_semantics. I will look for other paths.

> > > > > +           vmcb_set_gpat(svm->vmcb01.ptr, data);
> > > > > +           if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > > > > +                   vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > > > > +   }
> > > > > +}
> > > >
> > > > Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
> > > > not L1's, and svm_set_hpat() calls vmcb_set_gpat()?
> > >
> > > It's not just you.  I don't find it confusing per se, more that it's really
> > > subtle.
> > >
> > > > "gpat" means different things in the context of the VMCB or otherwise,
> > > > which kinda makes sense but is also not super clear. Maybe
> > > > svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?
> > >
> > > I think just svm_set_l1_pat() and svm_set_l2_pat(), because gpat straight up
> > > doesn't exist when NPT is disabled/unsupported.
> >
> > My intention was that "gpat" and "hpat" were from the perspective of the vCPU.
> >
> > I dislike svm_set_l1_pat() and svm_set_l2_pat(). As you point out
> > above, there is no independent L2 PAT when nested NPT is disabled. I
> > think that's less obvious than the fact that there is no gPAT from the
> > vCPU's perspective. My preference is to follow the APM terminology
> > when possible. Making up our own terms just leads to confusion.
>
> How about svm_set_pat() and svm_get_gpat()?  Because hPAT doesn't exist when NPT
> is unsupported/disabled, but KVM still needs to set the vCPU's emulated PAT value.

What if we don't break it up this way at all? Instead of distributing
the logic between svm_[gs]et_msr() and a few helper functions, we
could just have svm_[gs]et_msr() call svm_[gs]et_pat(), and all of the
logic can go in these two functions.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-13 23:31           ` Jim Mattson
@ 2026-02-17 23:27             ` Sean Christopherson
  2026-02-17 23:40               ` Yosry Ahmed
  2026-03-26 21:18               ` Jim Mattson
  0 siblings, 2 replies; 34+ messages in thread
From: Sean Christopherson @ 2026-02-17 23:27 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Yosry Ahmed, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Fri, Feb 13, 2026, Jim Mattson wrote:
> On Fri, Feb 13, 2026 at 2:19 PM Sean Christopherson <seanjc@google.com> wrote:
> > > > > > +           vmcb_set_gpat(svm->vmcb01.ptr, data);
> > > > > > +           if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
> > > > > > +                   vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
> > > > > > +   }
> > > > > > +}
> > > > >
> > > > > Is it me, or is it a bit confusing that svm_set_gpat() sets L2's gPAT
> > > > > not L1's, and svm_set_hpat() calls vmcb_set_gpat()?
> > > >
> > > > It's not just you.  I don't find it confusing per se, more that it's really
> > > > subtle.
> > > >
> > > > > "gpat" means different things in the context of the VMCB or otherwise,
> > > > > which kinda makes sense but is also not super clear. Maybe
> > > > > svm_set_l1_gpat() and svm_set_l2_gpat() is more clear?
> > > >
> > > > I think just svm_set_l1_pat() and svm_set_l2_pat(), because gpat straight up
> > > > doesn't exist when NPT is disabled/unsupported.
> > >
> > > My intention was that "gpat" and "hpat" were from the perspective of the vCPU.
> > >
> > > I dislike svm_set_l1_pat() and svm_set_l2_pat(). As you point out
> > > above, there is no independent L2 PAT when nested NPT is disabled. I
> > > think that's less obvious than the fact that there is no gPAT from the
> > > vCPU's perspective. My preference is to follow the APM terminology
> > > when possible. Making up our own terms just leads to confusion.
> >
> > How about svm_set_pat() and svm_set_gpat()?  Because hPAT doesn't exist when NPT
> > is unsupported/disabled, but KVM still needs to set the vCPU's emulated PAT value.
> 
> What if we don't break it up this way at all? Instead of distributing
> the logic between svm_[gs]et_msr() and a few helper functions, we
> could just have svm_[gs]et_msr() call svm_[gs]et_pat(), and all of the
> logic can go in these two functions.

I like it.  And AFAICT it largely Just Works, because the calls from
svm_set_nested_state() will always be routed to gpat since the calls are already
guarded with is_guest_mode() + nested_npt_enabled().

Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
(to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
wrong value.

FWIW, this is what I ended up with when hacking on top of your patches to see how
this played out.

---
 arch/x86/kvm/svm/nested.c |  4 +--
 arch/x86/kvm/svm/svm.c    | 64 +++++++++++++++++++++++++--------------
 arch/x86/kvm/svm/svm.h    | 19 +-----------
 arch/x86/kvm/vmx/vmx.c    | 10 ++++--
 arch/x86/kvm/x86.c        |  9 ------
 5 files changed, 51 insertions(+), 55 deletions(-)

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index d854d29b0bd8..361f189d3967 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -2075,9 +2075,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
 
 	if (nested_npt_enabled(svm)) {
 		if (kvm_state->hdr.svm.flags & KVM_STATE_SVM_VALID_GPAT) {
-			svm_set_gpat(svm, kvm_state->hdr.svm.gpat);
+			svm_set_pat(vcpu, kvm_state->hdr.svm.gpat, true);
 		} else {
-			svm_set_gpat(svm, vcpu->arch.pat);
+			svm_set_pat(vcpu, vcpu->arch.pat, true);
 			svm->nested.legacy_gpat_semantics = true;
 		}
 	}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 93ce0c3232c6..94c3b3cadd54 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -251,6 +251,44 @@ int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer)
 	return 0;
 }
 
+static bool svm_is_access_to_gpat(struct kvm_vcpu *vcpu, bool host_initiated)
+{
+	/*
+	 * When nested NPT is enabled, L2 has a separate PAT from L1.  Guest
+	 * accesses to IA32_PAT while running L2 target L2's gPAT;
+	 * host-initiated accesses always target L1's hPAT for backward and
+	 * forward KVM_SET_MSRS compatibility with older kernels.
+	 */
+	WARN_ON_ONCE(host_initiated && vcpu->wants_to_run);
+
+	return !host_initiated && is_guest_mode(vcpu) &&
+	       nested_npt_enabled(to_svm(vcpu));
+}
+
+void svm_set_pat(struct kvm_vcpu *vcpu, u64 pat, bool host_initiated)
+{
+	struct vcpu_svm *svm = to_svm(vcpu);
+
+	if (svm_is_access_to_gpat(vcpu, host_initiated)) {
+		vmcb_set_gpat(svm->nested.vmcb02.ptr, pat);
+		return;
+	}
+
+	svm->vcpu.arch.pat = pat;
+
+	if (!npt_enabled)
+		return;
+
+	vmcb_set_gpat(svm->vmcb01.ptr, pat);
+
+	if (svm->nested.legacy_gpat_semantics) {
+		svm->nested.save.g_pat = pat;
+		vmcb_set_gpat(svm->nested.vmcb02.ptr, pat);
+	} else if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm)) {
+		vmcb_set_gpat(svm->nested.vmcb02.ptr, pat);
+	}
+}
+
 static u32 svm_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
@@ -2838,16 +2876,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = svm->msr_decfg;
 		break;
 	case MSR_IA32_CR_PAT:
-		/*
-		 * When nested NPT is enabled, L2 has a separate PAT from
-		 * L1.  Guest accesses to IA32_PAT while running L2 target
-		 * L2's gPAT; host-initiated accesses always target L1's
-		 * hPAT for backward and forward KVM_GET_MSRS compatibility
-		 * with older kernels.
-		 */
-		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
-		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
-		    nested_npt_enabled(svm))
+		if (svm_is_access_to_gpat(vcpu, msr_info->host_initiated))
 			msr_info->data = svm->nested.save.g_pat;
 		else
 			msr_info->data = vcpu->arch.pat;
@@ -2937,19 +2966,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 	case MSR_IA32_CR_PAT:
 		if (!kvm_pat_valid(data))
 			return 1;
-		/*
-		 * When nested NPT is enabled, L2 has a separate PAT from
-		 * L1.  Guest accesses to IA32_PAT while running L2 target
-		 * L2's gPAT; host-initiated accesses always target L1's
-		 * hPAT for backward and forward KVM_SET_MSRS compatibility
-		 * with older kernels.
-		 */
-		WARN_ON_ONCE(msr->host_initiated && vcpu->wants_to_run);
-		if (!msr->host_initiated && is_guest_mode(vcpu) &&
-		    nested_npt_enabled(svm))
-			svm_set_gpat(svm, data);
-		else
-			svm_set_hpat(svm, data);
+
+		svm_set_pat(vcpu, data, msr->host_initiated);
 		break;
 	case MSR_IA32_SPEC_CTRL:
 		if (!msr->host_initiated &&
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index 0bb9fdcb489d..71502db3f679 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -616,24 +616,6 @@ static inline bool nested_npt_enabled(struct vcpu_svm *svm)
 	return svm->nested.ctl.misc_ctl & SVM_MISC_ENABLE_NP;
 }
 
-static inline void svm_set_gpat(struct vcpu_svm *svm, u64 data)
-{
-	svm->nested.save.g_pat = data;
-	vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
-}
-
-static inline void svm_set_hpat(struct vcpu_svm *svm, u64 data)
-{
-	svm->vcpu.arch.pat = data;
-	if (npt_enabled) {
-		vmcb_set_gpat(svm->vmcb01.ptr, data);
-		if (is_guest_mode(&svm->vcpu) && !nested_npt_enabled(svm))
-			vmcb_set_gpat(svm->nested.vmcb02.ptr, data);
-	}
-	if (svm->nested.legacy_gpat_semantics)
-		svm_set_gpat(svm, data);
-}
-
 static inline bool nested_vnmi_enabled(struct vcpu_svm *svm)
 {
 	return guest_cpu_cap_has(&svm->vcpu, X86_FEATURE_VNMI) &&
@@ -780,6 +762,7 @@ void svm_enable_lbrv(struct kvm_vcpu *vcpu);
 void svm_update_lbrv(struct kvm_vcpu *vcpu);
 
 int svm_set_efer(struct kvm_vcpu *vcpu, u64 efer);
+void svm_set_pat(struct kvm_vcpu *vcpu, u64 pat, bool host_initiated);
 void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
 void svm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4);
 void disable_nmi_singlestep(struct vcpu_svm *svm);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 967b58a8ab9d..546056e690eb 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2141,6 +2141,9 @@ int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 #endif
 	case MSR_EFER:
 		return kvm_get_msr_common(vcpu, msr_info);
+	case MSR_IA32_CR_PAT:
+		msr_info->data = vcpu->arch.pat;
+		break;
 	case MSR_IA32_TSX_CTRL:
 		if (!msr_info->host_initiated &&
 		    !(vcpu->arch.arch_capabilities & ARCH_CAP_TSX_CTRL_MSR))
@@ -2468,9 +2471,10 @@ int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		goto find_uret_msr;
 	case MSR_IA32_CR_PAT:
-		ret = kvm_set_msr_common(vcpu, msr_info);
-		if (ret)
-			break;
+		if (!kvm_pat_valid(data))
+			return 1;
+
+		vcpu->arch.pat = data;
 
 		if (is_guest_mode(vcpu) &&
 		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 416899b5dbe4..41936f83a17f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4025,12 +4025,6 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 			return 1;
 		}
 		break;
-	case MSR_IA32_CR_PAT:
-		if (!kvm_pat_valid(data))
-			return 1;
-
-		vcpu->arch.pat = data;
-		break;
 	case MTRRphysBase_MSR(0) ... MSR_MTRRfix4K_F8000:
 	case MSR_MTRRdefType:
 		return kvm_mtrr_set_msr(vcpu, msr, data);
@@ -4436,9 +4430,6 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		msr_info->data = kvm_scale_tsc(rdtsc(), ratio) + offset;
 		break;
 	}
-	case MSR_IA32_CR_PAT:
-		msr_info->data = vcpu->arch.pat;
-		break;
 	case MSR_MTRRcap:
 	case MTRRphysBase_MSR(0) ... MSR_MTRRfix4K_F8000:
 	case MSR_MTRRdefType:

base-commit: 7539434a6984ba5accfdd8e296fb834558f95df4
--
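
[Editor's note: as context for the hunks above, kvm_pat_valid() accepts a PAT value only if every byte encodes a valid architectural memory type (0, 1, 4, 5, 6, or 7). A standalone userspace model of that check (a sketch for illustration, not the authoritative kernel source):]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Each of the eight PAT entries must be one of the architectural
 * memory types: 0 (UC), 1 (WC), 4 (WT), 5 (WP), 6 (WB), 7 (UC-).
 * Encodings 2 and 3 are reserved.
 */
static bool pat_valid(uint64_t data)
{
	/* The high five bits of every byte must be clear (entry <= 7). */
	if (data & 0xF8F8F8F8F8F8F8F8ULL)
		return false;
	/* Reject 2 and 3: if bit 1 of an entry is set, bit 2 must be too. */
	return (data | ((data & 0x0202020202020202ULL) << 1)) == data;
}
```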

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-17 23:27             ` Sean Christopherson
@ 2026-02-17 23:40               ` Yosry Ahmed
  2026-02-17 23:44                 ` Sean Christopherson
  2026-03-26 21:18               ` Jim Mattson
  1 sibling, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-17 23:40 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

> I like it.  And AFAICT it largely Just Works, because the calls from
> svm_set_nested_state() will always be routed to gpat since the calls are already
> guarded with is_guest_mode() + nested_npt_enabled().
> 
> Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
> (to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
> be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
> it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
> wrong value.

+1 on both points.

> FWIW, this is what I ended up with when hacking on top of your patches to see how
> this played out.
> 
> ---
> @@ -2838,16 +2876,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>  		msr_info->data = svm->msr_decfg;
>  		break;
>  	case MSR_IA32_CR_PAT:
> -		/*
> -		 * When nested NPT is enabled, L2 has a separate PAT from
> -		 * L1.  Guest accesses to IA32_PAT while running L2 target
> -		 * L2's gPAT; host-initiated accesses always target L1's
> -		 * hPAT for backward and forward KVM_GET_MSRS compatibility
> -		 * with older kernels.
> -		 */
> -		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
> -		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
> -		    nested_npt_enabled(svm))
> +		if (svm_is_access_to_gpat(vcpu, msr_info->host_initiated))
>  			msr_info->data = svm->nested.save.g_pat;
>  		else
>  			msr_info->data = vcpu->arch.pat;

I'd go a step further here and add svm_get_pat(), then this just
becomes:

	msr_info->data = svm_get_pat(vcpu, msr_info->host_initiated);

It's more consistent with svm_set_msr(), and completely abstracts the L1
vs. L2 PAT logic with the helpers.
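
[Editor's note: to make the suggestion concrete, the selection logic the proposed svm_get_pat() would encapsulate can be modeled in userspace as follows (hypothetical helper name from the suggestion above; kernel structures stubbed out for illustration):]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stubbed-down stand-ins for the kernel structures (hypothetical). */
struct vcpu_model {
	uint64_t pat;           /* vcpu->arch.pat (hPAT, L1's PAT) */
	uint64_t nested_gpat;   /* svm->nested.save.g_pat (gPAT, L2's PAT) */
	bool is_guest_mode;     /* is_guest_mode(vcpu) */
	bool nested_npt;        /* nested_npt_enabled(svm) */
};

/*
 * Sketch of the proposed svm_get_pat(): guest accesses while L2 is
 * running with nested NPT read gPAT; everything else, including
 * host-initiated KVM_GET_MSRS, reads hPAT.
 */
static uint64_t svm_get_pat(const struct vcpu_model *v, bool host_initiated)
{
	if (!host_initiated && v->is_guest_mode && v->nested_npt)
		return v->nested_gpat;
	return v->pat;
}
```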

> @@ -2937,19 +2966,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
>  	case MSR_IA32_CR_PAT:
>  		if (!kvm_pat_valid(data))
>  			return 1;
> -		/*
> -		 * When nested NPT is enabled, L2 has a separate PAT from
> -		 * L1.  Guest accesses to IA32_PAT while running L2 target
> -		 * L2's gPAT; host-initiated accesses always target L1's
> -		 * hPAT for backward and forward KVM_SET_MSRS compatibility
> -		 * with older kernels.
> -		 */
> -		WARN_ON_ONCE(msr->host_initiated && vcpu->wants_to_run);
> -		if (!msr->host_initiated && is_guest_mode(vcpu) &&
> -		    nested_npt_enabled(svm))
> -			svm_set_gpat(svm, data);
> -		else
> -			svm_set_hpat(svm, data);
> +
> +		svm_set_pat(vcpu, data, msr->host_initiated);
>  		break;
>  	case MSR_IA32_SPEC_CTRL:
>  		if (!msr->host_initiated &&


* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-17 23:40               ` Yosry Ahmed
@ 2026-02-17 23:44                 ` Sean Christopherson
  0 siblings, 0 replies; 34+ messages in thread
From: Sean Christopherson @ 2026-02-17 23:44 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Jim Mattson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Tue, Feb 17, 2026, Yosry Ahmed wrote:
> > @@ -2838,16 +2876,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		msr_info->data = svm->msr_decfg;
> >  		break;
> >  	case MSR_IA32_CR_PAT:
> > -		/*
> > -		 * When nested NPT is enabled, L2 has a separate PAT from
> > -		 * L1.  Guest accesses to IA32_PAT while running L2 target
> > -		 * L2's gPAT; host-initiated accesses always target L1's
> > -		 * hPAT for backward and forward KVM_GET_MSRS compatibility
> > -		 * with older kernels.
> > -		 */
> > -		WARN_ON_ONCE(msr_info->host_initiated && vcpu->wants_to_run);
> > -		if (!msr_info->host_initiated && is_guest_mode(vcpu) &&
> > -		    nested_npt_enabled(svm))
> > +		if (svm_is_access_to_gpat(vcpu, msr_info->host_initiated))
> >  			msr_info->data = svm->nested.save.g_pat;
> >  		else
> >  			msr_info->data = vcpu->arch.pat;
> 
> I'd go a step further here and add svm_get_pat(), then this just
> becomes:
> 
> 	msr_info->data = svm_get_pat(vcpu, msr_info->host_initiated);
> 
> It's more consistent with svm_set_msr(), and completely abstracts the L1
> vs. L2 PAT logic with the helpers.

Either way works for me.


* Re: [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat
  2026-02-13  0:22   ` Yosry Ahmed
@ 2026-02-20 22:26     ` Jim Mattson
  2026-02-20 23:25       ` Yosry Ahmed
  0 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-02-20 22:26 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Thu, Feb 12, 2026 at 4:22 PM Yosry Ahmed <yosry.ahmed@linux.dev> wrote:

> > @@ -2006,13 +2012,16 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
> >
> >       /*
> >        * Validate host state saved from before VMRUN (see
> > -      * nested_svm_check_permissions).
> > +      * nested_svm_check_permissions). Note that the g_pat field is not
> > +      * validated, because (a) it may have been clobbered by SMM before
> > +      * KVM_GET_NESTED_STATE, and (b) it is not loaded at emulated
> > +      * #VMEXIT.
>
> (b) here means that svm_copy_vmrun_state() does not copy it to vmcb01,
> and the value is restored by KVM_SET_MSRS, right?

Actually, (b) refers to the open-coded block of assignments in
nested_svm_vmexit() under the comment:

        /*
         * Restore processor state that had been saved in vmcb01
         */

> If my understanding is correct:
>
> Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>


* Re: [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat
  2026-02-20 22:26     ` Jim Mattson
@ 2026-02-20 23:25       ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-02-20 23:25 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

February 21, 2026 at 12:26 AM, "Jim Mattson" <jmattson@google.com> wrote:
> 
> On Thu, Feb 12, 2026 at 4:22 PM Yosry Ahmed <> wrote:
> 
> > > @@ -2006,13 +2012,16 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
> > >
> > >       /*
> > >        * Validate host state saved from before VMRUN (see
> > > -      * nested_svm_check_permissions).
> > > +      * nested_svm_check_permissions). Note that the g_pat field is not
> > > +      * validated, because (a) it may have been clobbered by SMM before
> > > +      * KVM_GET_NESTED_STATE, and (b) it is not loaded at emulated
> > > +      * #VMEXIT.
> >
> > (b) here means that svm_copy_vmrun_state() does not copy it to vmcb01,
> > and the value is restored by KVM_SET_MSRS, right?
> 
> Actually, (b) refers to the open-coded block of assignments in
> nested_svm_vmexit() under the comment:
> 
>         /*
>          * Restore processor state that had been saved in vmcb01
>          */
> 

Yeah, IIUC it's the same thing: we migrate them and copy them here to vmcb01 so that we can restore them in nested_svm_vmexit().


* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-02-17 23:27             ` Sean Christopherson
  2026-02-17 23:40               ` Yosry Ahmed
@ 2026-03-26 21:18               ` Jim Mattson
  2026-03-26 21:26                 ` Yosry Ahmed
  1 sibling, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-03-26 21:18 UTC (permalink / raw)
  To: Sean Christopherson
  Cc: Yosry Ahmed, Paolo Bonzini, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin, Shuah Khan,
	kvm, linux-kernel, linux-kselftest

On Tue, Feb 17, 2026 at 3:27 PM Sean Christopherson <seanjc@google.com> wrote:

> Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
> (to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
> be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
> it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
> wrong value.

Though I included this change in v5 and v6, TIL that TDX calls
kvm_[gs]et_msr_common(MSR_IA32_CR_PAT), so the common handling is not
fully forked.


* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-03-26 21:18               ` Jim Mattson
@ 2026-03-26 21:26                 ` Yosry Ahmed
  2026-03-26 21:56                   ` Jim Mattson
  0 siblings, 1 reply; 34+ messages in thread
From: Yosry Ahmed @ 2026-03-26 21:26 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Yosry Ahmed, Paolo Bonzini, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Shuah Khan, kvm, linux-kernel, linux-kselftest

On Thu, Mar 26, 2026 at 2:19 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Tue, Feb 17, 2026 at 3:27 PM Sean Christopherson <seanjc@google.com> wrote:
>
> > Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
> > (to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
> > be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
> > it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
> > wrong value.
>
> Though I included this change in v5 and v6, TIL that TDX calls
> kvm_[gs]et_msr_common(MSR_IA32_CR_PAT), so the common handling is not
> fully forked.

Do you plan to drop this patch or add PAT handling in
tdx_{get/set}_msr()? If you'll drop it, maybe add a warning if
kvm_[gs]et_msr_common(MSR_IA32_CR_PAT) is called from SVM?


* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-03-26 21:26                 ` Yosry Ahmed
@ 2026-03-26 21:56                   ` Jim Mattson
  2026-03-26 21:59                     ` Yosry Ahmed
  0 siblings, 1 reply; 34+ messages in thread
From: Jim Mattson @ 2026-03-26 21:56 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Sean Christopherson, Yosry Ahmed, Paolo Bonzini, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Shuah Khan, kvm, linux-kernel, linux-kselftest

On Thu, Mar 26, 2026 at 2:26 PM Yosry Ahmed <yosry@kernel.org> wrote:
>
> On Thu, Mar 26, 2026 at 2:19 PM Jim Mattson <jmattson@google.com> wrote:
> >
> > On Tue, Feb 17, 2026 at 3:27 PM Sean Christopherson <seanjc@google.com> wrote:
> >
> > > Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
> > > (to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
> > > be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
> > > it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
> > > wrong value.
> >
> > Though I included this change in v5 and v6, TIL that TDX calls
> > kvm_[gs]et_msr_common(MSR_IA32_CR_PAT), so the common handling is not
> > fully forked.
>
> Do you plan to drop this patch or add PAT handling in
> tdx_{get/set}_msr()? If you'll drop it, maybe add a warning if
> kvm_[gs]et_msr_common(MSR_IA32_CR_PAT) is called from SVM?

I plan to leave the MSR_IA32_CR_PAT code in kvm_[gs]et_msr_common().
Replicating the code in tdx.c seems like the wrong direction to go.

There's no precedent that I can see for checking the vendor module in
common code (though we do have kvm_x86_ops.name). I could add a
warning if invoked with (vcpu->arch.efer & EFER_SVME). WDYT?
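
[Editor's note: a userspace model of the proposed check (hypothetical helper; vcpu->arch.efer holds the guest's EFER, and EFER.SVME is architecturally bit 12, which only an SVM guest can set):]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define EFER_SVME (1ULL << 12)	/* EFER.SVME, architecturally bit 12 */

/*
 * Model of the proposed sanity check: if common MSR code is asked to
 * handle IA32_PAT while the guest's EFER.SVME is set, the caller is
 * almost certainly the SVM module, which now handles PAT itself.
 * (Imperfect: an SVM guest that never sets SVME won't trip it.)
 */
static bool common_pat_access_suspicious(uint64_t guest_efer)
{
	return (guest_efer & EFER_SVME) != 0;
}
```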


* Re: [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT
  2026-03-26 21:56                   ` Jim Mattson
@ 2026-03-26 21:59                     ` Yosry Ahmed
  0 siblings, 0 replies; 34+ messages in thread
From: Yosry Ahmed @ 2026-03-26 21:59 UTC (permalink / raw)
  To: Jim Mattson
  Cc: Sean Christopherson, Yosry Ahmed, Paolo Bonzini, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Shuah Khan, kvm, linux-kernel, linux-kselftest

On Thu, Mar 26, 2026 at 2:57 PM Jim Mattson <jmattson@google.com> wrote:
>
> On Thu, Mar 26, 2026 at 2:26 PM Yosry Ahmed <yosry@kernel.org> wrote:
> >
> > On Thu, Mar 26, 2026 at 2:19 PM Jim Mattson <jmattson@google.com> wrote:
> > >
> > > On Tue, Feb 17, 2026 at 3:27 PM Sean Christopherson <seanjc@google.com> wrote:
> > >
> > > > Side topic, either as a prep patch (to duplicate code) or as a follow-up patch
> > > > (to move the PAT handling in x86.c to vmx.c), the "common" handling of PAT should
> > > > be fully forked between VMX and SVM.  As of this patch, it's not just misleading,
> > > > it's actively dangerous since calling kvm_get_msr_common() for SVM would get the
> > > > wrong value.
> > >
> > > Though I included this change in v5 and v6, TIL that TDX calls
> > > kvm_[gs]et_msr_common(MSR_IA32_CR_PAT), so the common handling is not
> > > fully forked.
> >
> > Do you plan to drop this patch or add PAT handling in
> > tdx_{get/set}_msr()? If you'll drop it, maybe add a warning if
> > kvm_[gs]et_msr_common(MSR_IA32_CR_PAT) is called from SVM?
>
> I plan to leave the MSR_IA32_CR_PAT code in kvm_[gs]et_msr_common().
> Replicating the code in tdx.c seems like the wrong direction to go.
>
> There's no precedent that I can see for checking the vendor module in
> common code (though we do have kvm_x86_ops.name). I could add a
> warning if invoked with (vcpu->arch.efer & EFER_SVME). WDYT?

Yeah this should work. I assume Sean can decide to drop it when
applying if he dislikes it, but given that he said the code is
misleading and it would be dangerous if we end up calling from SVM --
I personally think the warning will act as documentation for the
former and safeguard for the latter.


end of thread, other threads:[~2026-03-26 22:00 UTC | newest]

Thread overview: 34+ messages
2026-02-12 15:58 [PATCH v4 0/8] KVM: x86: nSVM: Improve PAT virtualization Jim Mattson
2026-02-12 15:58 ` [PATCH v4 1/8] KVM: x86: nSVM: Clear VMCB_NPT clean bit when updating hPAT from guest mode Jim Mattson
2026-02-13  0:17   ` Yosry Ahmed
2026-02-13 15:26     ` Sean Christopherson
2026-02-13 15:32       ` Yosry Ahmed
2026-02-13 15:46         ` Jim Mattson
2026-02-12 15:58 ` [PATCH v4 2/8] KVM: x86: nSVM: Cache and validate vmcb12 g_pat Jim Mattson
2026-02-13  0:22   ` Yosry Ahmed
2026-02-20 22:26     ` Jim Mattson
2026-02-20 23:25       ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 3/8] KVM: x86: nSVM: Set vmcb02.g_pat correctly for nested NPT Jim Mattson
2026-02-13  0:27   ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 4/8] KVM: x86: nSVM: Redirect IA32_PAT accesses to either hPAT or gPAT Jim Mattson
2026-02-13  0:30   ` Yosry Ahmed
2026-02-13 15:20     ` Sean Christopherson
2026-02-13 15:42       ` Jim Mattson
2026-02-13 22:19         ` Sean Christopherson
2026-02-13 23:31           ` Jim Mattson
2026-02-17 23:27             ` Sean Christopherson
2026-02-17 23:40               ` Yosry Ahmed
2026-02-17 23:44                 ` Sean Christopherson
2026-03-26 21:18               ` Jim Mattson
2026-03-26 21:26                 ` Yosry Ahmed
2026-03-26 21:56                   ` Jim Mattson
2026-03-26 21:59                     ` Yosry Ahmed
2026-02-13 15:43       ` Yosry Ahmed
2026-02-13 15:44         ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 5/8] KVM: x86: nSVM: Save gPAT to vmcb12.g_pat on VMEXIT Jim Mattson
2026-02-13  0:33   ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 6/8] KVM: x86: nSVM: Save/restore gPAT with KVM_{GET,SET}_NESTED_STATE Jim Mattson
2026-02-13  0:36   ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 7/8] KVM: x86: nSVM: Handle restore of legacy nested state Jim Mattson
2026-02-13  0:38   ` Yosry Ahmed
2026-02-12 15:58 ` [PATCH v4 8/8] KVM: selftests: nSVM: Add svm_nested_pat test Jim Mattson
