* [PATCH 01/11] KVM: SVM: Truncate INVLPGA address in compatibility mode
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 02/11] KVM: x86/xen: Bug the VM if 32-bit KVM observes a 64-bit mode hypercall Sean Christopherson
` (9 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
Check for full 64-bit mode, not just long mode, when truncating the
virtual address as part of INVLPGA emulation. Compatibility mode doesn't
support 64-bit addressing.
Note, the FIXME still applies, e.g. if the guest deliberately targeted
EAX while in 64-bit mode via an address size override. That flaw isn't
worth fixing as it would require decoding the code stream, which would
open an entirely different can of worms, and in practice no sane guest
would shove garbage into RAX[63:32] and execute INVLPGA.
Note #2, VMSAVE, VMLOAD, and VMRUN all suffer from the same architectural
flaw of not providing the full linear address in a VMCB exit information
field, because, quoting the APM verbatim:
the linear address is available directly from the guest rAX register
(VMSAVE, VMLOAD, and VMRUN take a physical address, but their behavior
with respect to rAX is otherwise identical).
Fixes: bc9eff67fc35 ("KVM: SVM: Use default rAX size for INVLPGA emulation")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/svm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index e7fdd7a9c280..a1b2e4152afe 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2385,7 +2385,7 @@ static int invlpga_interception(struct kvm_vcpu *vcpu)
return 1;
/* FIXME: Handle an address size prefix. */
- if (!is_long_mode(vcpu))
+ if (!is_64_bit_mode(vcpu))
gva = (u32)gva;
trace_kvm_invlpga(to_svm(vcpu)->vmcb->save.rip, asid, gva);
--
2.53.0.1213.gd9a14994de-goog
^ permalink raw reply related	[flat|nested] 12+ messages in thread
* [PATCH 02/11] KVM: x86/xen: Bug the VM if 32-bit KVM observes a 64-bit mode hypercall
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
2026-04-09 23:56 ` [PATCH 01/11] KVM: SVM: Truncate INVLPGA address in compatibility mode Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 03/11] KVM: x86/xen: Don't truncate RAX when handling hypercall from protected guest Sean Christopherson
` (8 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
Bug the VM if 32-bit KVM attempts to handle a 64-bit hypercall, primarily
so that a future change to set "input" in mode-specific code doesn't
trigger a false positive warn=>error:
arch/x86/kvm/xen.c:1687:6: error: variable 'input' is used uninitialized
whenever 'if' condition is false [-Werror,-Wsometimes-uninitialized]
1687 | if (!longmode) {
| ^~~~~~~~~
arch/x86/kvm/xen.c:1708:31: note: uninitialized use occurs here
1708 | trace_kvm_xen_hypercall(cpl, input, params[0], params[1], params[2],
| ^~~~~
arch/x86/kvm/xen.c:1687:2: note: remove the 'if' if its condition is always true
1687 | if (!longmode) {
| ^~~~~~~~~~~~~~
arch/x86/kvm/xen.c:1677:11: note: initialize the variable 'input' to silence this warning
1677 | u64 input, params[6], r = -ENOSYS;
| ^
1 error generated.
Note, params[] also has the same flaw, but -Wsometimes-uninitialized
doesn't seem to be enforced for arrays, presumably because it's difficult
to avoid false positives on specific entries.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/xen.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 91fd3673c09a..6d9be74bb673 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1694,16 +1694,19 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
params[4] = (u32)kvm_rdi_read(vcpu);
params[5] = (u32)kvm_rbp_read(vcpu);
}
-#ifdef CONFIG_X86_64
else {
+#ifdef CONFIG_X86_64
params[0] = (u64)kvm_rdi_read(vcpu);
params[1] = (u64)kvm_rsi_read(vcpu);
params[2] = (u64)kvm_rdx_read(vcpu);
params[3] = (u64)kvm_r10_read(vcpu);
params[4] = (u64)kvm_r8_read(vcpu);
params[5] = (u64)kvm_r9_read(vcpu);
- }
+#else
+ KVM_BUG_ON(1, vcpu->kvm);
+ return -EIO;
#endif
+ }
cpl = kvm_x86_call(get_cpl)(vcpu);
trace_kvm_xen_hypercall(cpl, input, params[0], params[1], params[2],
params[3], params[4], params[5]);
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 03/11] KVM: x86/xen: Don't truncate RAX when handling hypercall from protected guest
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
2026-04-09 23:56 ` [PATCH 01/11] KVM: SVM: Truncate INVLPGA address in compatibility mode Sean Christopherson
2026-04-09 23:56 ` [PATCH 02/11] KVM: x86/xen: Bug the VM if 32-bit KVM observes a 64-bit mode hypercall Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 04/11] KVM: VMX: Read 32-bit GPR values for ENCLS instructions outside of 64-bit mode Sean Christopherson
` (7 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
Don't truncate RAX when handling a Xen hypercall for a guest with protected
state, as KVM's ABI is to assume the guest is in 64-bit mode for such cases
(the guest leaving garbage in bits 63:32 after a transition to 32-bit mode
is far less likely than bits 63:32 being necessary to complete the
hypercall).
Fixes: b5aead0064f3 ("KVM: x86: Assume a 64-bit hypercall for guests with protected state")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/xen.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 6d9be74bb673..895095dc684e 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1678,15 +1678,14 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
bool handled = false;
u8 cpl;
- input = (u64)kvm_register_read(vcpu, VCPU_REGS_RAX);
-
/* Hyper-V hypercalls get bit 31 set in EAX */
- if ((input & 0x80000000) &&
+ if ((kvm_rax_read(vcpu) & 0x80000000) &&
kvm_hv_hypercall_enabled(vcpu))
return kvm_hv_hypercall(vcpu);
longmode = is_64_bit_hypercall(vcpu);
if (!longmode) {
+ input = (u32)kvm_rax_read(vcpu);
params[0] = (u32)kvm_rbx_read(vcpu);
params[1] = (u32)kvm_rcx_read(vcpu);
params[2] = (u32)kvm_rdx_read(vcpu);
@@ -1696,6 +1695,7 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
}
else {
#ifdef CONFIG_X86_64
+ input = (u64)kvm_rax_read(vcpu);
params[0] = (u64)kvm_rdi_read(vcpu);
params[1] = (u64)kvm_rsi_read(vcpu);
params[2] = (u64)kvm_rdx_read(vcpu);
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 04/11] KVM: VMX: Read 32-bit GPR values for ENCLS instructions outside of 64-bit mode
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
` (2 preceding siblings ...)
2026-04-09 23:56 ` [PATCH 03/11] KVM: x86/xen: Don't truncate RAX when handling hypercall from protected guest Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 05/11] KVM: x86: Trace hypercall register *after* truncating values for 32-bit Sean Christopherson
` (6 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
When getting register values for ENCLS emulation, use kvm_register_read()
instead of kvm_<reg>_read() so that bits 63:32 of the register are dropped
if the guest is in 32-bit mode.
Note, the misleading/surprising behavior of kvm_<reg>_read() being "raw"
variants under the hood will be addressed once all non-benign bugs are
fixed.
Fixes: 70210c044b4e ("KVM: VMX: Add SGX ENCLS[ECREATE] handler to enforce CPUID restrictions")
Fixes: b6f084ca5538 ("KVM: VMX: Add ENCLS[EINIT] handler to support SGX Launch Control (LC)")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/sgx.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index df1d0cf76947..4c61fc33f764 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -225,8 +225,8 @@ static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
struct x86_exception ex;
int r;
- if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 32, 32, &pageinfo_gva) ||
- sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva))
+ if (sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RBX), 32, 32, &pageinfo_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RCX), 4096, 4096, &secs_gva))
return 1;
/*
@@ -302,9 +302,9 @@ static int handle_encls_einit(struct kvm_vcpu *vcpu)
gpa_t sig_gpa, secs_gpa, token_gpa;
int ret, trapnr;
- if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 1808, 4096, &sig_gva) ||
- sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva) ||
- sgx_get_encls_gva(vcpu, kvm_rdx_read(vcpu), 304, 512, &token_gva))
+ if (sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RBX), 1808, 4096, &sig_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RCX), 4096, 4096, &secs_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RDX), 304, 512, &token_gva))
return 1;
/*
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 05/11] KVM: x86: Trace hypercall register *after* truncating values for 32-bit
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
` (3 preceding siblings ...)
2026-04-09 23:56 ` [PATCH 04/11] KVM: VMX: Read 32-bit GPR values for ENCLS instructions outside of 64-bit mode Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 06/11] KVM: x86: Move kvm_<reg>_{read,write}() definitions to x86.h Sean Christopherson
` (5 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
When tracing hypercalls, invoke the tracepoint *after* truncating the
register values for 32-bit guests so as not to record unused garbage (in
the extremely unlikely scenario that the guest left garbage in a register
after transitioning from 64-bit mode to 32-bit mode).
Fixes: 229456fc34b1 ("KVM: convert custom marker based tracing to event traces")
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0a1b63c63d1a..34ee79c1cbf3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -10438,8 +10438,6 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
++vcpu->stat.hypercalls;
- trace_kvm_hypercall(nr, a0, a1, a2, a3);
-
if (!op_64_bit) {
nr &= 0xFFFFFFFF;
a0 &= 0xFFFFFFFF;
@@ -10448,6 +10446,8 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
a3 &= 0xFFFFFFFF;
}
+ trace_kvm_hypercall(nr, a0, a1, a2, a3);
+
if (cpl) {
ret = -KVM_EPERM;
goto out;
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 06/11] KVM: x86: Move kvm_<reg>_{read,write}() definitions to x86.h
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
` (4 preceding siblings ...)
2026-04-09 23:56 ` [PATCH 05/11] KVM: x86: Trace hypercall register *after* truncating values for 32-bit Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 07/11] KVM: x86: Add mode-aware versions of kvm_<reg>_{read,write}() helpers Sean Christopherson
` (4 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
Move the direct GPR accessors to x86.h so that they can use
is_64_bit_mode().
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/kvm_cache_regs.h | 34 ----------------------------------
arch/x86/kvm/x86.h | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 34 insertions(+), 34 deletions(-)
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index 8ddb01191d6f..efa23ed5b5d4 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -16,34 +16,6 @@
static_assert(!(KVM_POSSIBLE_CR0_GUEST_BITS & X86_CR0_PDPTR_BITS));
-#define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
-static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
-{ \
- return vcpu->arch.regs[VCPU_REGS_##uname]; \
-} \
-static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \
- unsigned long val) \
-{ \
- vcpu->arch.regs[VCPU_REGS_##uname] = val; \
-}
-BUILD_KVM_GPR_ACCESSORS(rax, RAX)
-BUILD_KVM_GPR_ACCESSORS(rbx, RBX)
-BUILD_KVM_GPR_ACCESSORS(rcx, RCX)
-BUILD_KVM_GPR_ACCESSORS(rdx, RDX)
-BUILD_KVM_GPR_ACCESSORS(rbp, RBP)
-BUILD_KVM_GPR_ACCESSORS(rsi, RSI)
-BUILD_KVM_GPR_ACCESSORS(rdi, RDI)
-#ifdef CONFIG_X86_64
-BUILD_KVM_GPR_ACCESSORS(r8, R8)
-BUILD_KVM_GPR_ACCESSORS(r9, R9)
-BUILD_KVM_GPR_ACCESSORS(r10, R10)
-BUILD_KVM_GPR_ACCESSORS(r11, R11)
-BUILD_KVM_GPR_ACCESSORS(r12, R12)
-BUILD_KVM_GPR_ACCESSORS(r13, R13)
-BUILD_KVM_GPR_ACCESSORS(r14, R14)
-BUILD_KVM_GPR_ACCESSORS(r15, R15)
-#endif
-
/*
* Using the register cache from interrupt context is generally not allowed, as
* caching a register and marking it available/dirty can't be done atomically,
@@ -217,12 +189,6 @@ static inline ulong kvm_read_cr4(struct kvm_vcpu *vcpu)
return kvm_read_cr4_bits(vcpu, ~0UL);
}
-static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
-{
- return (kvm_rax_read(vcpu) & -1u)
- | ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
-}
-
static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
{
vcpu->arch.hflags |= HF_GUEST_MASK;
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 38a905fa86de..c44154ed3f26 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -421,6 +421,40 @@ static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
return false;
}
+#define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
+static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
+{ \
+ return vcpu->arch.regs[VCPU_REGS_##uname]; \
+} \
+static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \
+ unsigned long val) \
+{ \
+ vcpu->arch.regs[VCPU_REGS_##uname] = val; \
+}
+BUILD_KVM_GPR_ACCESSORS(rax, RAX)
+BUILD_KVM_GPR_ACCESSORS(rbx, RBX)
+BUILD_KVM_GPR_ACCESSORS(rcx, RCX)
+BUILD_KVM_GPR_ACCESSORS(rdx, RDX)
+BUILD_KVM_GPR_ACCESSORS(rbp, RBP)
+BUILD_KVM_GPR_ACCESSORS(rsi, RSI)
+BUILD_KVM_GPR_ACCESSORS(rdi, RDI)
+#ifdef CONFIG_X86_64
+BUILD_KVM_GPR_ACCESSORS(r8, R8)
+BUILD_KVM_GPR_ACCESSORS(r9, R9)
+BUILD_KVM_GPR_ACCESSORS(r10, R10)
+BUILD_KVM_GPR_ACCESSORS(r11, R11)
+BUILD_KVM_GPR_ACCESSORS(r12, R12)
+BUILD_KVM_GPR_ACCESSORS(r13, R13)
+BUILD_KVM_GPR_ACCESSORS(r14, R14)
+BUILD_KVM_GPR_ACCESSORS(r15, R15)
+#endif
+
+static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
+{
+ return (kvm_rax_read(vcpu) & -1u)
+ | ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
+}
+
static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg)
{
unsigned long val = kvm_register_read_raw(vcpu, reg);
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 07/11] KVM: x86: Add mode-aware versions of kvm_<reg>_{read,write}() helpers
2026-04-09 23:56 [PATCH 00/11] KVM: x86: Clean up kvm_<reg>_{read,write}() mess Sean Christopherson
` (5 preceding siblings ...)
2026-04-09 23:56 ` [PATCH 06/11] KVM: x86: Move kvm_<reg>_{read,write}() definitions to x86.h Sean Christopherson
@ 2026-04-09 23:56 ` Sean Christopherson
2026-04-09 23:56 ` [PATCH 08/11] KVM: x86: Drop non-raw kvm_<reg>_write() helpers Sean Christopherson
` (3 subsequent siblings)
10 siblings, 0 replies; 12+ messages in thread
From: Sean Christopherson @ 2026-04-09 23:56 UTC (permalink / raw)
To: Sean Christopherson, Paolo Bonzini, Vitaly Kuznetsov,
David Woodhouse, Paul Durrant
Cc: kvm, linux-kernel, Yosry Ahmed
Make kvm_<reg>_{read,write}() mode-aware (where the value is truncated to
32 bits if the vCPU isn't in 64-bit mode), and convert all the intentional
"raw" accesses to kvm_<reg>_{read,write}_raw() versions. To avoid
confusion and bikeshedding over whether or not explicit 32-bit accesses
should use the "raw" or mode-aware variants, add and use "e" versions, e.g.
for things like RDMSR, WRMSR, and CPUID, where the instruction uses only
bits 31:0, regardless of mode.
No functional change intended (all use of "e" versions is for cases where
the value is already truncated due to bouncing through a u32).
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/cpuid.c | 12 ++--
arch/x86/kvm/hyperv.c | 24 ++++----
arch/x86/kvm/hyperv.h | 4 +-
arch/x86/kvm/svm/nested.c | 6 +-
arch/x86/kvm/svm/svm.c | 13 ++--
arch/x86/kvm/vmx/nested.c | 8 +--
arch/x86/kvm/vmx/sgx.c | 4 +-
arch/x86/kvm/vmx/tdx.c | 18 +++---
arch/x86/kvm/x86.c | 121 +++++++++++++++++++-------------------
arch/x86/kvm/x86.h | 88 +++++++++++++++++----------
arch/x86/kvm/xen.c | 30 +++++-----
11 files changed, 175 insertions(+), 153 deletions(-)
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index e69156b54cff..fe765f1c3b15 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -2165,13 +2165,13 @@ int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
!kvm_require_cpl(vcpu, 0))
return 1;
- eax = kvm_rax_read(vcpu);
- ecx = kvm_rcx_read(vcpu);
+ eax = kvm_eax_read(vcpu);
+ ecx = kvm_ecx_read(vcpu);
kvm_cpuid(vcpu, &eax, &ebx, &ecx, &edx, false);
- kvm_rax_write(vcpu, eax);
- kvm_rbx_write(vcpu, ebx);
- kvm_rcx_write(vcpu, ecx);
- kvm_rdx_write(vcpu, edx);
+ kvm_eax_write(vcpu, eax);
+ kvm_ebx_write(vcpu, ebx);
+ kvm_ecx_write(vcpu, ecx);
+ kvm_edx_write(vcpu, edx);
return kvm_skip_emulated_instruction(vcpu);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_cpuid);
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 9b140bbdc1d8..14e2fcf19def 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -2375,10 +2375,10 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
longmode = is_64_bit_hypercall(vcpu);
if (longmode)
- kvm_rax_write(vcpu, result);
+ kvm_rax_write_raw(vcpu, result);
else {
- kvm_rdx_write(vcpu, result >> 32);
- kvm_rax_write(vcpu, result & 0xffffffff);
+ kvm_edx_write(vcpu, result >> 32);
+ kvm_eax_write(vcpu, result & 0xffffffff);
}
}
@@ -2542,18 +2542,18 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_64_bit_hypercall(vcpu)) {
- hc.param = kvm_rcx_read(vcpu);
- hc.ingpa = kvm_rdx_read(vcpu);
- hc.outgpa = kvm_r8_read(vcpu);
+ hc.param = kvm_rcx_read_raw(vcpu);
+ hc.ingpa = kvm_rdx_read_raw(vcpu);
+ hc.outgpa = kvm_r8_read_raw(vcpu);
} else
#endif
{
- hc.param = ((u64)kvm_rdx_read(vcpu) << 32) |
- (kvm_rax_read(vcpu) & 0xffffffff);
- hc.ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
- (kvm_rcx_read(vcpu) & 0xffffffff);
- hc.outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
- (kvm_rsi_read(vcpu) & 0xffffffff);
+		hc.param = ((u64)kvm_edx_read(vcpu) << 32) |
+			   kvm_eax_read(vcpu);
+		hc.ingpa = ((u64)kvm_ebx_read(vcpu) << 32) |
+			   kvm_ecx_read(vcpu);
+		hc.outgpa = ((u64)kvm_edi_read(vcpu) << 32) |
+			    kvm_esi_read(vcpu);
}
hc.code = hc.param & 0xffff;
diff --git a/arch/x86/kvm/hyperv.h b/arch/x86/kvm/hyperv.h
index 6301f79fcbae..65e89ed65349 100644
--- a/arch/x86/kvm/hyperv.h
+++ b/arch/x86/kvm/hyperv.h
@@ -232,8 +232,8 @@ static inline bool kvm_hv_is_tlb_flush_hcall(struct kvm_vcpu *vcpu)
if (!hv_vcpu)
return false;
- code = is_64_bit_hypercall(vcpu) ? kvm_rcx_read(vcpu) :
- kvm_rax_read(vcpu);
+ code = is_64_bit_hypercall(vcpu) ? kvm_rcx_read_raw(vcpu) :
+ kvm_eax_read(vcpu);
return (code == HVCALL_FLUSH_VIRTUAL_ADDRESS_SPACE ||
code == HVCALL_FLUSH_VIRTUAL_ADDRESS_LIST ||
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 961804df5f45..00de9375c836 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -757,7 +757,7 @@ static void nested_vmcb02_prepare_save(struct vcpu_svm *svm)
svm->vcpu.arch.cr2 = save->cr2;
- kvm_rax_write(vcpu, save->rax);
+ kvm_rax_write_raw(vcpu, save->rax);
kvm_rsp_write(vcpu, save->rsp);
kvm_rip_write(vcpu, save->rip);
@@ -1238,7 +1238,7 @@ static int nested_svm_vmexit_update_vmcb12(struct kvm_vcpu *vcpu)
vmcb12->save.rflags = kvm_get_rflags(vcpu);
vmcb12->save.rip = kvm_rip_read(vcpu);
vmcb12->save.rsp = kvm_rsp_read(vcpu);
- vmcb12->save.rax = kvm_rax_read(vcpu);
+ vmcb12->save.rax = kvm_rax_read_raw(vcpu);
vmcb12->save.dr7 = vmcb02->save.dr7;
vmcb12->save.dr6 = svm->vcpu.arch.dr6;
vmcb12->save.cpl = vmcb02->save.cpl;
@@ -1391,7 +1391,7 @@ void nested_svm_vmexit(struct vcpu_svm *svm)
svm_set_efer(vcpu, vmcb01->save.efer);
svm_set_cr0(vcpu, vmcb01->save.cr0 | X86_CR0_PE);
svm_set_cr4(vcpu, vmcb01->save.cr4);
- kvm_rax_write(vcpu, vmcb01->save.rax);
+ kvm_rax_write_raw(vcpu, vmcb01->save.rax);
kvm_rsp_write(vcpu, vmcb01->save.rsp);
kvm_rip_write(vcpu, vmcb01->save.rip);
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a1b2e4152afe..0e2e7a803d64 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2378,15 +2378,12 @@ static int clgi_interception(struct kvm_vcpu *vcpu)
static int invlpga_interception(struct kvm_vcpu *vcpu)
{
- gva_t gva = kvm_rax_read(vcpu);
- u32 asid = kvm_rcx_read(vcpu);
-
- if (nested_svm_check_permissions(vcpu))
- return 1;
-
/* FIXME: Handle an address size prefix. */
- if (!is_64_bit_mode(vcpu))
- gva = (u32)gva;
+ gva_t gva = kvm_rax_read(vcpu);
+ u32 asid = kvm_ecx_read(vcpu);
+
+ if (nested_svm_check_permissions(vcpu))
+ return 1;
trace_kvm_invlpga(to_svm(vcpu)->vmcb->save.rip, asid, gva);
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 3fe88f29be7a..9a1bf35fe7cd 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -6135,7 +6135,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
struct vmcs12 *vmcs12)
{
- u32 index = kvm_rcx_read(vcpu);
+ u32 index = kvm_ecx_read(vcpu);
u64 new_eptp;
if (WARN_ON_ONCE(!nested_cpu_has_ept(vmcs12)))
@@ -6169,7 +6169,7 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
{
struct vcpu_vmx *vmx = to_vmx(vcpu);
struct vmcs12 *vmcs12;
- u32 function = kvm_rax_read(vcpu);
+ u32 function = kvm_eax_read(vcpu);
/*
* VMFUNC should never execute cleanly while L1 is active; KVM supports
@@ -6291,7 +6291,7 @@ static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
exit_reason.basic == EXIT_REASON_MSR_WRITE_IMM)
msr_index = vmx_get_exit_qual(vcpu);
else
- msr_index = kvm_rcx_read(vcpu);
+ msr_index = kvm_ecx_read(vcpu);
/*
* The MSR_BITMAP page is divided into four 1024-byte bitmaps,
@@ -6401,7 +6401,7 @@ static bool nested_vmx_exit_handled_encls(struct kvm_vcpu *vcpu,
!nested_cpu_has2(vmcs12, SECONDARY_EXEC_ENCLS_EXITING))
return false;
- encls_leaf = kvm_rax_read(vcpu);
+ encls_leaf = kvm_eax_read(vcpu);
if (encls_leaf > 62)
encls_leaf = 63;
return vmcs12->encls_exiting_bitmap & BIT_ULL(encls_leaf);
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 4c61fc33f764..4ca11e5ff4eb 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -352,7 +352,7 @@ static int handle_encls_einit(struct kvm_vcpu *vcpu)
rflags &= ~X86_EFLAGS_ZF;
vmx_set_rflags(vcpu, rflags);
- kvm_rax_write(vcpu, ret);
+ kvm_eax_write(vcpu, ret);
return kvm_skip_emulated_instruction(vcpu);
}
@@ -380,7 +380,7 @@ static inline bool sgx_enabled_in_guest_bios(struct kvm_vcpu *vcpu)
int handle_encls(struct kvm_vcpu *vcpu)
{
- u32 leaf = (u32)kvm_rax_read(vcpu);
+ u32 leaf = kvm_eax_read(vcpu);
if (!enable_sgx || !guest_cpu_cap_has(vcpu, X86_FEATURE_SGX) ||
!guest_cpu_cap_has(vcpu, X86_FEATURE_SGX1)) {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 1e47c194af53..9f6885d035a2 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1163,11 +1163,11 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
static int tdx_emulate_vmcall(struct kvm_vcpu *vcpu)
{
- kvm_rax_write(vcpu, to_tdx(vcpu)->vp_enter_args.r10);
- kvm_rbx_write(vcpu, to_tdx(vcpu)->vp_enter_args.r11);
- kvm_rcx_write(vcpu, to_tdx(vcpu)->vp_enter_args.r12);
- kvm_rdx_write(vcpu, to_tdx(vcpu)->vp_enter_args.r13);
- kvm_rsi_write(vcpu, to_tdx(vcpu)->vp_enter_args.r14);
+ kvm_rax_write_raw(vcpu, to_tdx(vcpu)->vp_enter_args.r10);
+ kvm_rbx_write_raw(vcpu, to_tdx(vcpu)->vp_enter_args.r11);
+ kvm_rcx_write_raw(vcpu, to_tdx(vcpu)->vp_enter_args.r12);
+ kvm_rdx_write_raw(vcpu, to_tdx(vcpu)->vp_enter_args.r13);
+ kvm_rsi_write_raw(vcpu, to_tdx(vcpu)->vp_enter_args.r14);
return __kvm_emulate_hypercall(vcpu, 0, complete_hypercall_exit);
}
@@ -2031,12 +2031,12 @@ int tdx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t fastpath)
case EXIT_REASON_IO_INSTRUCTION:
return tdx_emulate_io(vcpu);
case EXIT_REASON_MSR_READ:
- kvm_rcx_write(vcpu, tdx->vp_enter_args.r12);
+ kvm_ecx_write(vcpu, tdx->vp_enter_args.r12);
return kvm_emulate_rdmsr(vcpu);
case EXIT_REASON_MSR_WRITE:
- kvm_rcx_write(vcpu, tdx->vp_enter_args.r12);
- kvm_rax_write(vcpu, tdx->vp_enter_args.r13 & -1u);
- kvm_rdx_write(vcpu, tdx->vp_enter_args.r13 >> 32);
+ kvm_ecx_write(vcpu, tdx->vp_enter_args.r12);
+ kvm_eax_write(vcpu, tdx->vp_enter_args.r13 & -1u);
+ kvm_edx_write(vcpu, tdx->vp_enter_args.r13 >> 32);
return kvm_emulate_wrmsr(vcpu);
case EXIT_REASON_EPT_MISCONFIG:
return tdx_emulate_mmio(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 34ee79c1cbf3..e5d073763fc1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1313,7 +1313,7 @@ int kvm_emulate_xsetbv(struct kvm_vcpu *vcpu)
{
/* Note, #UD due to CR4.OSXSAVE=0 has priority over the intercept. */
if (kvm_x86_call(get_cpl)(vcpu) != 0 ||
- __kvm_set_xcr(vcpu, kvm_rcx_read(vcpu), kvm_read_edx_eax(vcpu))) {
+ __kvm_set_xcr(vcpu, kvm_ecx_read(vcpu), kvm_read_edx_eax(vcpu))) {
kvm_inject_gp(vcpu, 0);
return 1;
}
@@ -1602,7 +1602,7 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_get_dr);
int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu)
{
- u32 pmc = kvm_rcx_read(vcpu);
+ u32 pmc = kvm_ecx_read(vcpu);
u64 data;
if (kvm_pmu_rdpmc(vcpu, pmc, &data)) {
@@ -1610,8 +1610,8 @@ int kvm_emulate_rdpmc(struct kvm_vcpu *vcpu)
return 1;
}
- kvm_rax_write(vcpu, (u32)data);
- kvm_rdx_write(vcpu, data >> 32);
+ kvm_eax_write(vcpu, (u32)data);
+ kvm_edx_write(vcpu, data >> 32);
return kvm_skip_emulated_instruction(vcpu);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_rdpmc);
@@ -2058,8 +2058,8 @@ EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_msr_write);
static void complete_userspace_rdmsr(struct kvm_vcpu *vcpu)
{
if (!vcpu->run->msr.error) {
- kvm_rax_write(vcpu, (u32)vcpu->run->msr.data);
- kvm_rdx_write(vcpu, vcpu->run->msr.data >> 32);
+ kvm_eax_write(vcpu, (u32)vcpu->run->msr.data);
+ kvm_edx_write(vcpu, vcpu->run->msr.data >> 32);
}
}
@@ -2140,8 +2140,8 @@ static int __kvm_emulate_rdmsr(struct kvm_vcpu *vcpu, u32 msr, int reg,
trace_kvm_msr_read(msr, data);
if (reg < 0) {
- kvm_rax_write(vcpu, data & -1u);
- kvm_rdx_write(vcpu, (data >> 32) & -1u);
+ kvm_eax_write(vcpu, data & -1u);
+ kvm_edx_write(vcpu, (data >> 32) & -1u);
} else {
kvm_register_write(vcpu, reg, data);
}
@@ -2158,7 +2158,7 @@ static int __kvm_emulate_rdmsr(struct kvm_vcpu *vcpu, u32 msr, int reg,
int kvm_emulate_rdmsr(struct kvm_vcpu *vcpu)
{
- return __kvm_emulate_rdmsr(vcpu, kvm_rcx_read(vcpu), -1,
+ return __kvm_emulate_rdmsr(vcpu, kvm_ecx_read(vcpu), -1,
complete_fast_rdmsr);
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_rdmsr);
@@ -2194,7 +2194,7 @@ static int __kvm_emulate_wrmsr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
int kvm_emulate_wrmsr(struct kvm_vcpu *vcpu)
{
- return __kvm_emulate_wrmsr(vcpu, kvm_rcx_read(vcpu),
+ return __kvm_emulate_wrmsr(vcpu, kvm_ecx_read(vcpu),
kvm_read_edx_eax(vcpu));
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(kvm_emulate_wrmsr);
@@ -2304,7 +2304,7 @@ static fastpath_t __handle_fastpath_wrmsr(struct kvm_vcpu *vcpu, u32 msr, u64 da
fastpath_t handle_fastpath_wrmsr(struct kvm_vcpu *vcpu)
{
- return __handle_fastpath_wrmsr(vcpu, kvm_rcx_read(vcpu),
+ return __handle_fastpath_wrmsr(vcpu, kvm_ecx_read(vcpu),
kvm_read_edx_eax(vcpu));
}
EXPORT_SYMBOL_FOR_KVM_INTERNAL(handle_fastpath_wrmsr);
@@ -9699,7 +9699,7 @@ static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
unsigned short port)
{
- unsigned long val = kvm_rax_read(vcpu);
+ unsigned long val = kvm_rax_read_raw(vcpu);
int ret = emulator_pio_out(vcpu, size, port, &val, 1);
if (ret)
@@ -9735,10 +9735,10 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
}
/* For size less than 4 we merge, else we zero extend */
- val = (vcpu->arch.pio.size < 4) ? kvm_rax_read(vcpu) : 0;
+ val = (vcpu->arch.pio.size < 4) ? kvm_rax_read_raw(vcpu) : 0;
complete_emulator_pio_in(vcpu, &val);
- kvm_rax_write(vcpu, val);
+ kvm_rax_write_raw(vcpu, val);
return kvm_skip_emulated_instruction(vcpu);
}
@@ -9750,11 +9750,11 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
int ret;
/* For size less than 4 we merge, else we zero extend */
- val = (size < 4) ? kvm_rax_read(vcpu) : 0;
+ val = (size < 4) ? kvm_rax_read_raw(vcpu) : 0;
ret = emulator_pio_in(vcpu, size, port, &val, 1);
if (ret) {
- kvm_rax_write(vcpu, val);
+ kvm_rax_write_raw(vcpu, val);
return ret;
}
@@ -10421,29 +10421,30 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
if (!is_64_bit_hypercall(vcpu))
ret = (u32)ret;
- kvm_rax_write(vcpu, ret);
+ kvm_rax_write_raw(vcpu, ret);
return kvm_skip_emulated_instruction(vcpu);
}
int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl,
int (*complete_hypercall)(struct kvm_vcpu *))
{
- unsigned long ret;
- unsigned long nr = kvm_rax_read(vcpu);
- unsigned long a0 = kvm_rbx_read(vcpu);
- unsigned long a1 = kvm_rcx_read(vcpu);
- unsigned long a2 = kvm_rdx_read(vcpu);
- unsigned long a3 = kvm_rsi_read(vcpu);
int op_64_bit = is_64_bit_hypercall(vcpu);
+ unsigned long ret, nr, a0, a1, a2, a3;
++vcpu->stat.hypercalls;
- if (!op_64_bit) {
- nr &= 0xFFFFFFFF;
- a0 &= 0xFFFFFFFF;
- a1 &= 0xFFFFFFFF;
- a2 &= 0xFFFFFFFF;
- a3 &= 0xFFFFFFFF;
+ if (op_64_bit) {
+ nr = kvm_rax_read_raw(vcpu);
+ a0 = kvm_rbx_read_raw(vcpu);
+ a1 = kvm_rcx_read_raw(vcpu);
+ a2 = kvm_rdx_read_raw(vcpu);
+ a3 = kvm_rsi_read_raw(vcpu);
+ } else {
+ nr = kvm_eax_read(vcpu);
+ a0 = kvm_ebx_read(vcpu);
+ a1 = kvm_ecx_read(vcpu);
+ a2 = kvm_edx_read(vcpu);
+ a3 = kvm_esi_read(vcpu);
}
trace_kvm_hypercall(nr, a0, a1, a2, a3);
@@ -12144,23 +12145,23 @@ static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
emulator_writeback_register_cache(vcpu->arch.emulate_ctxt);
vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
}
- regs->rax = kvm_rax_read(vcpu);
- regs->rbx = kvm_rbx_read(vcpu);
- regs->rcx = kvm_rcx_read(vcpu);
- regs->rdx = kvm_rdx_read(vcpu);
- regs->rsi = kvm_rsi_read(vcpu);
- regs->rdi = kvm_rdi_read(vcpu);
+ regs->rax = kvm_rax_read_raw(vcpu);
+ regs->rbx = kvm_rbx_read_raw(vcpu);
+ regs->rcx = kvm_rcx_read_raw(vcpu);
+ regs->rdx = kvm_rdx_read_raw(vcpu);
+ regs->rsi = kvm_rsi_read_raw(vcpu);
+ regs->rdi = kvm_rdi_read_raw(vcpu);
regs->rsp = kvm_rsp_read(vcpu);
- regs->rbp = kvm_rbp_read(vcpu);
+ regs->rbp = kvm_rbp_read_raw(vcpu);
#ifdef CONFIG_X86_64
- regs->r8 = kvm_r8_read(vcpu);
- regs->r9 = kvm_r9_read(vcpu);
- regs->r10 = kvm_r10_read(vcpu);
- regs->r11 = kvm_r11_read(vcpu);
- regs->r12 = kvm_r12_read(vcpu);
- regs->r13 = kvm_r13_read(vcpu);
- regs->r14 = kvm_r14_read(vcpu);
- regs->r15 = kvm_r15_read(vcpu);
+ regs->r8 = kvm_r8_read_raw(vcpu);
+ regs->r9 = kvm_r9_read_raw(vcpu);
+ regs->r10 = kvm_r10_read_raw(vcpu);
+ regs->r11 = kvm_r11_read_raw(vcpu);
+ regs->r12 = kvm_r12_read_raw(vcpu);
+ regs->r13 = kvm_r13_read_raw(vcpu);
+ regs->r14 = kvm_r14_read_raw(vcpu);
+ regs->r15 = kvm_r15_read_raw(vcpu);
#endif
regs->rip = kvm_rip_read(vcpu);
@@ -12184,23 +12185,23 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
vcpu->arch.emulate_regs_need_sync_from_vcpu = true;
vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
- kvm_rax_write(vcpu, regs->rax);
- kvm_rbx_write(vcpu, regs->rbx);
- kvm_rcx_write(vcpu, regs->rcx);
- kvm_rdx_write(vcpu, regs->rdx);
- kvm_rsi_write(vcpu, regs->rsi);
- kvm_rdi_write(vcpu, regs->rdi);
+ kvm_rax_write_raw(vcpu, regs->rax);
+ kvm_rbx_write_raw(vcpu, regs->rbx);
+ kvm_rcx_write_raw(vcpu, regs->rcx);
+ kvm_rdx_write_raw(vcpu, regs->rdx);
+ kvm_rsi_write_raw(vcpu, regs->rsi);
+ kvm_rdi_write_raw(vcpu, regs->rdi);
kvm_rsp_write(vcpu, regs->rsp);
- kvm_rbp_write(vcpu, regs->rbp);
+ kvm_rbp_write_raw(vcpu, regs->rbp);
#ifdef CONFIG_X86_64
- kvm_r8_write(vcpu, regs->r8);
- kvm_r9_write(vcpu, regs->r9);
- kvm_r10_write(vcpu, regs->r10);
- kvm_r11_write(vcpu, regs->r11);
- kvm_r12_write(vcpu, regs->r12);
- kvm_r13_write(vcpu, regs->r13);
- kvm_r14_write(vcpu, regs->r14);
- kvm_r15_write(vcpu, regs->r15);
+ kvm_r8_write_raw(vcpu, regs->r8);
+ kvm_r9_write_raw(vcpu, regs->r9);
+ kvm_r10_write_raw(vcpu, regs->r10);
+ kvm_r11_write_raw(vcpu, regs->r11);
+ kvm_r12_write_raw(vcpu, regs->r12);
+ kvm_r13_write_raw(vcpu, regs->r13);
+ kvm_r14_write_raw(vcpu, regs->r14);
+ kvm_r15_write_raw(vcpu, regs->r15);
#endif
kvm_rip_write(vcpu, regs->rip);
@@ -13103,7 +13104,7 @@ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
* on RESET. But, go through the motions in case that's ever remedied.
*/
cpuid_0x1 = kvm_find_cpuid_entry(vcpu, 1);
- kvm_rdx_write(vcpu, cpuid_0x1 ? cpuid_0x1->eax : 0x600);
+ kvm_edx_write(vcpu, cpuid_0x1 ? cpuid_0x1->eax : 0x600);
kvm_x86_call(vcpu_reset)(vcpu, init_event);
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index c44154ed3f26..2550380fa79e 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -421,53 +421,77 @@ static inline bool vcpu_match_mmio_gpa(struct kvm_vcpu *vcpu, gpa_t gpa)
return false;
}
-#define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
-static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
-{ \
- return vcpu->arch.regs[VCPU_REGS_##uname]; \
-} \
-static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \
- unsigned long val) \
-{ \
- vcpu->arch.regs[VCPU_REGS_##uname] = val; \
+static __always_inline unsigned long kvm_reg_mode_mask(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_X86_64
+ return is_64_bit_mode(vcpu) ? GENMASK(63, 0) : GENMASK(31, 0);
+#else
+ return GENMASK(31, 0);
+#endif
+}
+
+#define __BUILD_KVM_GPR_ACCESSORS(lname, uname) \
+static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu) \
+{ \
+ return vcpu->arch.regs[VCPU_REGS_##uname] & kvm_reg_mode_mask(vcpu); \
+} \
+static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \
+ unsigned long val) \
+{ \
+ vcpu->arch.regs[VCPU_REGS_##uname] = val & kvm_reg_mode_mask(vcpu); \
+} \
+static __always_inline unsigned long kvm_##lname##_read_raw(struct kvm_vcpu *vcpu) \
+{ \
+ return vcpu->arch.regs[VCPU_REGS_##uname]; \
+} \
+static __always_inline void kvm_##lname##_write_raw(struct kvm_vcpu *vcpu, \
+ unsigned long val) \
+{ \
+ vcpu->arch.regs[VCPU_REGS_##uname] = val; \
}
-BUILD_KVM_GPR_ACCESSORS(rax, RAX)
-BUILD_KVM_GPR_ACCESSORS(rbx, RBX)
-BUILD_KVM_GPR_ACCESSORS(rcx, RCX)
-BUILD_KVM_GPR_ACCESSORS(rdx, RDX)
-BUILD_KVM_GPR_ACCESSORS(rbp, RBP)
-BUILD_KVM_GPR_ACCESSORS(rsi, RSI)
-BUILD_KVM_GPR_ACCESSORS(rdi, RDI)
+#define BUILD_KVM_GPR_ACCESSORS(lname, uname) \
+static __always_inline u32 kvm_e##lname##_read(struct kvm_vcpu *vcpu) \
+{ \
+ return vcpu->arch.regs[VCPU_REGS_##uname]; \
+} \
+static __always_inline void kvm_e##lname##_write(struct kvm_vcpu *vcpu, u32 val) \
+{ \
+ vcpu->arch.regs[VCPU_REGS_##uname] = val; \
+} \
+__BUILD_KVM_GPR_ACCESSORS(r##lname, uname)
+
+BUILD_KVM_GPR_ACCESSORS(ax, RAX)
+BUILD_KVM_GPR_ACCESSORS(bx, RBX)
+BUILD_KVM_GPR_ACCESSORS(cx, RCX)
+BUILD_KVM_GPR_ACCESSORS(dx, RDX)
+BUILD_KVM_GPR_ACCESSORS(bp, RBP)
+BUILD_KVM_GPR_ACCESSORS(si, RSI)
+BUILD_KVM_GPR_ACCESSORS(di, RDI)
#ifdef CONFIG_X86_64
-BUILD_KVM_GPR_ACCESSORS(r8, R8)
-BUILD_KVM_GPR_ACCESSORS(r9, R9)
-BUILD_KVM_GPR_ACCESSORS(r10, R10)
-BUILD_KVM_GPR_ACCESSORS(r11, R11)
-BUILD_KVM_GPR_ACCESSORS(r12, R12)
-BUILD_KVM_GPR_ACCESSORS(r13, R13)
-BUILD_KVM_GPR_ACCESSORS(r14, R14)
-BUILD_KVM_GPR_ACCESSORS(r15, R15)
+__BUILD_KVM_GPR_ACCESSORS(r8, R8)
+__BUILD_KVM_GPR_ACCESSORS(r9, R9)
+__BUILD_KVM_GPR_ACCESSORS(r10, R10)
+__BUILD_KVM_GPR_ACCESSORS(r11, R11)
+__BUILD_KVM_GPR_ACCESSORS(r12, R12)
+__BUILD_KVM_GPR_ACCESSORS(r13, R13)
+__BUILD_KVM_GPR_ACCESSORS(r14, R14)
+__BUILD_KVM_GPR_ACCESSORS(r15, R15)
#endif
static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
{
- return (kvm_rax_read(vcpu) & -1u)
- | ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
+ return kvm_eax_read(vcpu) | (u64)(kvm_edx_read(vcpu)) << 32;
}
static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu, int reg)
{
- unsigned long val = kvm_register_read_raw(vcpu, reg);
-
- return is_64_bit_mode(vcpu) ? val : (u32)val;
+ return kvm_register_read_raw(vcpu, reg) & kvm_reg_mode_mask(vcpu);
}
static inline void kvm_register_write(struct kvm_vcpu *vcpu,
int reg, unsigned long val)
{
- if (!is_64_bit_mode(vcpu))
- val = (u32)val;
- return kvm_register_write_raw(vcpu, reg, val);
+ return kvm_register_write_raw(vcpu, reg, val & kvm_reg_mode_mask(vcpu));
}
static inline bool kvm_check_has_quirk(struct kvm *kvm, u64 quirk)
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 895095dc684e..e98fa3544bdd 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -1408,7 +1408,7 @@ int kvm_xen_hvm_config(struct kvm *kvm, struct kvm_xen_hvm_config *xhc)
static int kvm_xen_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
{
- kvm_rax_write(vcpu, result);
+ kvm_rax_write_raw(vcpu, result);
return kvm_skip_emulated_instruction(vcpu);
}
@@ -1685,23 +1685,23 @@ int kvm_xen_hypercall(struct kvm_vcpu *vcpu)
longmode = is_64_bit_hypercall(vcpu);
if (!longmode) {
- input = (u32)kvm_rax_read(vcpu);
- params[0] = (u32)kvm_rbx_read(vcpu);
- params[1] = (u32)kvm_rcx_read(vcpu);
- params[2] = (u32)kvm_rdx_read(vcpu);
- params[3] = (u32)kvm_rsi_read(vcpu);
- params[4] = (u32)kvm_rdi_read(vcpu);
- params[5] = (u32)kvm_rbp_read(vcpu);
+ input = kvm_eax_read(vcpu);
+ params[0] = kvm_ebx_read(vcpu);
+ params[1] = kvm_ecx_read(vcpu);
+ params[2] = kvm_edx_read(vcpu);
+ params[3] = kvm_esi_read(vcpu);
+ params[4] = kvm_edi_read(vcpu);
+ params[5] = kvm_ebp_read(vcpu);
}
else {
#ifdef CONFIG_X86_64
- input = (u64)kvm_rax_read(vcpu);
- params[0] = (u64)kvm_rdi_read(vcpu);
- params[1] = (u64)kvm_rsi_read(vcpu);
- params[2] = (u64)kvm_rdx_read(vcpu);
- params[3] = (u64)kvm_r10_read(vcpu);
- params[4] = (u64)kvm_r8_read(vcpu);
- params[5] = (u64)kvm_r9_read(vcpu);
+ input = (u64)kvm_rax_read_raw(vcpu);
+ params[0] = (u64)kvm_rdi_read_raw(vcpu);
+ params[1] = (u64)kvm_rsi_read_raw(vcpu);
+ params[2] = (u64)kvm_rdx_read_raw(vcpu);
+ params[3] = (u64)kvm_r10_read_raw(vcpu);
+ params[4] = (u64)kvm_r8_read_raw(vcpu);
+ params[5] = (u64)kvm_r9_read_raw(vcpu);
#else
KVM_BUG_ON(1, vcpu->kvm);
return -EIO;
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 08/11] KVM: x86: Drop non-raw kvm_<reg>_write() helpers
Drop the non-raw, mode-aware kvm_<reg>_write() helpers as there is no
usage in KVM, and in all likelihood there will never be usage in KVM as
use of hardcoded registers in instructions is uncommon, and *modifying*
hardcoded registers is practically unheard of. While there are a few
instructions that modify registers in mode-aware ways, e.g. REP string
and some ENCLS varieties, the odds of KVM needing to emulate such
instructions (outside of the full emulator) are vanishingly small.
Drop kvm_<reg>_write() to prevent incorrect usage; _if_ a new instruction
comes along that needs to modify a hardcoded register, this can be
reverted.
No functional change intended.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index 2550380fa79e..cebea89b296c 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -435,11 +435,6 @@ static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)
{ \
return vcpu->arch.regs[VCPU_REGS_##uname] & kvm_reg_mode_mask(vcpu); \
} \
-static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu, \
- unsigned long val) \
-{ \
- vcpu->arch.regs[VCPU_REGS_##uname] = val & kvm_reg_mode_mask(vcpu); \
-} \
static __always_inline unsigned long kvm_##lname##_read_raw(struct kvm_vcpu *vcpu) \
{ \
return vcpu->arch.regs[VCPU_REGS_##uname]; \
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 09/11] KVM: nSVM: Use kvm_rax_read() now that it's mode-aware
Now that kvm_rax_read() truncates the output value to 32 bits if the
vCPU isn't in 64-bit mode, use it instead of the more verbose (and
technically slower) kvm_register_read().
Note! VMLOAD, VMSAVE, and VMRUN emulation are still technically buggy,
as they can use EAX (versus RAX) in 64-bit mode via an address size
prefix. Don't bother trying to handle that case, as it would require
decoding the code stream, which would open an entirely different can of
worms, and in practice no sane guest would shove garbage into RAX[63:32]
and then execute VMLOAD/VMSAVE/VMRUN with just EAX.
No functional change intended.
Cc: Yosry Ahmed <yosry@kernel.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/svm/nested.c | 2 +-
arch/x86/kvm/svm/svm.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index 00de9375c836..7bea5ad02805 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1113,7 +1113,7 @@ int nested_svm_vmrun(struct kvm_vcpu *vcpu)
if (WARN_ON_ONCE(!svm->nested.initialized))
return -EINVAL;
- vmcb12_gpa = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ vmcb12_gpa = kvm_rax_read(vcpu);
if (!page_address_valid(vcpu, vmcb12_gpa)) {
kvm_inject_gp(vcpu, 0);
return 1;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 0e2e7a803d64..79d5982cf294 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2187,7 +2187,7 @@ static int intr_interception(struct kvm_vcpu *vcpu)
static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)
{
- u64 vmcb12_gpa = kvm_register_read(vcpu, VCPU_REGS_RAX);
+ u64 vmcb12_gpa = kvm_rax_read(vcpu);
struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb *vmcb12;
struct kvm_host_map map;
@@ -2295,7 +2295,7 @@ static int gp_interception(struct kvm_vcpu *vcpu)
if (nested_svm_check_permissions(vcpu))
return 1;
- if (!page_address_valid(vcpu, kvm_register_read(vcpu, VCPU_REGS_RAX)))
+ if (!page_address_valid(vcpu, kvm_rax_read(vcpu)))
goto reinject;
/*
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 10/11] Revert "KVM: VMX: Read 32-bit GPR values for ENCLS instructions outside of 64-bit mode"
Now that the kvm_<reg>_read() helpers are mode-aware, i.e. functionally
equivalent to kvm_register_read(), revert back to the less verbose versions.
No functional change intended.
This reverts commit 60919eccf6764c71cef31a1afeaa1a36b8e5ab85.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/sgx.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
index 4ca11e5ff4eb..5476743b66e7 100644
--- a/arch/x86/kvm/vmx/sgx.c
+++ b/arch/x86/kvm/vmx/sgx.c
@@ -225,8 +225,8 @@ static int handle_encls_ecreate(struct kvm_vcpu *vcpu)
struct x86_exception ex;
int r;
- if (sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RBX), 32, 32, &pageinfo_gva) ||
- sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RCX), 4096, 4096, &secs_gva))
+ if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 32, 32, &pageinfo_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva))
return 1;
/*
@@ -302,9 +302,9 @@ static int handle_encls_einit(struct kvm_vcpu *vcpu)
gpa_t sig_gpa, secs_gpa, token_gpa;
int ret, trapnr;
- if (sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RBX), 1808, 4096, &sig_gva) ||
- sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RCX), 4096, 4096, &secs_gva) ||
- sgx_get_encls_gva(vcpu, kvm_register_read(vcpu, VCPU_REGS_RDX), 304, 512, &token_gva))
+ if (sgx_get_encls_gva(vcpu, kvm_rbx_read(vcpu), 1808, 4096, &sig_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_rcx_read(vcpu), 4096, 4096, &secs_gva) ||
+ sgx_get_encls_gva(vcpu, kvm_rdx_read(vcpu), 304, 512, &token_gva))
return 1;
/*
--
2.53.0.1213.gd9a14994de-goog
* [PATCH 11/11] KVM: x86: Harden is_64_bit_hypercall() against bugs on 32-bit kernels
Unconditionally return %false for is_64_bit_hypercall() on 32-bit kernels
to guard against guest_state_protected being set incorrectly, and because
in a (very) hypothetical world where 32-bit KVM supports protected guests,
assuming a hypercall was made in 64-bit mode is flat out wrong.
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/x86.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
index cebea89b296c..5a79ec5f5bad 100644
--- a/arch/x86/kvm/x86.h
+++ b/arch/x86/kvm/x86.h
@@ -271,12 +271,16 @@ static inline bool is_64_bit_mode(struct kvm_vcpu *vcpu)
static inline bool is_64_bit_hypercall(struct kvm_vcpu *vcpu)
{
+#ifdef CONFIG_X86_64
/*
* If running with protected guest state, the CS register is not
* accessible. The hypercall register values will have had to been
* provided in 64-bit mode, so assume the guest is in 64-bit.
*/
return vcpu->arch.guest_state_protected || is_64_bit_mode(vcpu);
+#else
+ return false;
+#endif
}
static inline bool x86_exception_has_error_code(unsigned int vector)
--
2.53.0.1213.gd9a14994de-goog