kvm.vger.kernel.org archive mirror
* [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs
@ 2019-04-30 17:36 Sean Christopherson
  2019-04-30 17:36 ` [PATCH 1/3] KVM: x86: Omit caching logic for " Sean Christopherson
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 17:36 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář, Joerg Roedel; +Cc: kvm

KVM's GPR caching logic is unconditionally emitted for all GPR accesses
(that go through the accessors), even when the register being accessed
is fixed and always available.  This bloats KVM due to the instructions
needed to test and set the available/dirty bitmaps, and to conditionally
invoke the .cache_reg() callback.  The latter is especially painful when
compiling with retpolines.

Eliminate the unnecessary overhead by:

 - Adding dedicated accessors for every GPR
 - Omitting the caching logic for GPRs that are always available
 - Preventing use of the unoptimized versions for fixed accesses

The last patch is an opportunistic clean up of VMX, which has gradually
acquired a bad habit of sprinkling in direct access to GPRs.
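
For context, the generic accessor that every GPR read currently funnels
through looks like this (simplified from kvm_cache_regs.h); the bitmap
test and the conditional ->cache_reg() call are pure overhead for GPRs
that are never cached, and the indirect call becomes a retpoline with
CONFIG_RETPOLINE=y:

	static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
						      enum kvm_reg reg)
	{
		if (!test_bit(reg, (unsigned long *)&vcpu->arch.regs_avail))
			kvm_x86_ops->cache_reg(vcpu, reg);

		return vcpu->arch.regs[reg];
	}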

Sean Christopherson (3):
  KVM: x86: Omit caching logic for always-available GPRs
  KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs
  KVM: VMX: Use accessors for GPRs outside of dedicated caching logic

 arch/x86/kvm/cpuid.c          | 12 ++---
 arch/x86/kvm/hyperv.c         | 24 ++++-----
 arch/x86/kvm/kvm_cache_regs.h | 73 +++++++++++++++++++++++----
 arch/x86/kvm/svm.c            | 34 ++++++-------
 arch/x86/kvm/vmx/nested.c     | 18 +++----
 arch/x86/kvm/vmx/vmx.c        | 14 +++---
 arch/x86/kvm/x86.c            | 93 +++++++++++++++++------------------
 7 files changed, 159 insertions(+), 109 deletions(-)

-- 
2.21.0



* [PATCH 1/3] KVM: x86: Omit caching logic for always-available GPRs
  2019-04-30 17:36 [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Sean Christopherson
@ 2019-04-30 17:36 ` Sean Christopherson
  2019-04-30 17:36 ` [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs Sean Christopherson
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 17:36 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář, Joerg Roedel; +Cc: kvm

Except for RSP and RIP, which are held in VMX's VMCS, GPRs are always
treated "available and dirtly" on both VMX and SVM, i.e. are
unconditionally loaded/saved immediately before/after VM-Enter/VM-Exit.

Eliminating the unnecessary caching code reduces the size of KVM by a
non-trivial amount, much of which comes from the most common code paths.
E.g. on x86_64, kvm_emulate_cpuid() is reduced from 342 to 182 bytes and
kvm_emulate_hypercall() from 1362 to 1143, with the total size of KVM
dropping by ~1000 bytes.  With CONFIG_RETPOLINE=y, the savings are even
more pronounced: 353->182 and 1418->1172 respectively, with a total
reduction of well over 2000 bytes.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---

As an added bonus, this also gave me a legitimate way to get rid of the
funky wrapping of ": 0;" in complete_fast_pio_in(), which is probably my
favorite part of this patch.  ;-)
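
For reference, BUILD_KVM_GPR_ACCESSORS(rax, RAX) expands to nothing more
than the two trivial helpers below (expansion reproduced here purely for
illustration), i.e. a read compiles down to a single load with no bitmap
test and no indirect call:

	static __always_inline unsigned long kvm_rax_read(struct kvm_vcpu *vcpu)
	{
		return vcpu->arch.regs[VCPU_REGS_RAX];
	}

	static __always_inline void kvm_rax_write(struct kvm_vcpu *vcpu,
						  unsigned long val)
	{
		vcpu->arch.regs[VCPU_REGS_RAX] = val;
	}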

 arch/x86/kvm/cpuid.c          | 12 ++---
 arch/x86/kvm/hyperv.c         | 24 +++++-----
 arch/x86/kvm/kvm_cache_regs.h | 32 ++++++++++++-
 arch/x86/kvm/svm.c            | 26 +++++-----
 arch/x86/kvm/vmx/vmx.c        |  2 +-
 arch/x86/kvm/x86.c            | 89 +++++++++++++++++------------------
 6 files changed, 105 insertions(+), 80 deletions(-)

diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index fd3951638ae4..fc8ccf596624 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -962,13 +962,13 @@ int kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
 	if (cpuid_fault_enabled(vcpu) && !kvm_require_cpl(vcpu, 0))
 		return 1;
 
-	eax = kvm_register_read(vcpu, VCPU_REGS_RAX);
-	ecx = kvm_register_read(vcpu, VCPU_REGS_RCX);
+	eax = kvm_rax_read(vcpu);
+	ecx = kvm_rcx_read(vcpu);
 	kvm_cpuid(vcpu, &eax, &ebx, &ecx, &edx, true);
-	kvm_register_write(vcpu, VCPU_REGS_RAX, eax);
-	kvm_register_write(vcpu, VCPU_REGS_RBX, ebx);
-	kvm_register_write(vcpu, VCPU_REGS_RCX, ecx);
-	kvm_register_write(vcpu, VCPU_REGS_RDX, edx);
+	kvm_rax_write(vcpu, eax);
+	kvm_rbx_write(vcpu, ebx);
+	kvm_rcx_write(vcpu, ecx);
+	kvm_rdx_write(vcpu, edx);
 	return kvm_skip_emulated_instruction(vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
index 421899f6ad7b..7868daceb25b 100644
--- a/arch/x86/kvm/hyperv.c
+++ b/arch/x86/kvm/hyperv.c
@@ -1526,10 +1526,10 @@ static void kvm_hv_hypercall_set_result(struct kvm_vcpu *vcpu, u64 result)
 
 	longmode = is_64_bit_mode(vcpu);
 	if (longmode)
-		kvm_register_write(vcpu, VCPU_REGS_RAX, result);
+		kvm_rax_write(vcpu, result);
 	else {
-		kvm_register_write(vcpu, VCPU_REGS_RDX, result >> 32);
-		kvm_register_write(vcpu, VCPU_REGS_RAX, result & 0xffffffff);
+		kvm_rdx_write(vcpu, result >> 32);
+		kvm_rax_write(vcpu, result & 0xffffffff);
 	}
 }
 
@@ -1602,18 +1602,18 @@ int kvm_hv_hypercall(struct kvm_vcpu *vcpu)
 	longmode = is_64_bit_mode(vcpu);
 
 	if (!longmode) {
-		param = ((u64)kvm_register_read(vcpu, VCPU_REGS_RDX) << 32) |
-			(kvm_register_read(vcpu, VCPU_REGS_RAX) & 0xffffffff);
-		ingpa = ((u64)kvm_register_read(vcpu, VCPU_REGS_RBX) << 32) |
-			(kvm_register_read(vcpu, VCPU_REGS_RCX) & 0xffffffff);
-		outgpa = ((u64)kvm_register_read(vcpu, VCPU_REGS_RDI) << 32) |
-			(kvm_register_read(vcpu, VCPU_REGS_RSI) & 0xffffffff);
+		param = ((u64)kvm_rdx_read(vcpu) << 32) |
+			(kvm_rax_read(vcpu) & 0xffffffff);
+		ingpa = ((u64)kvm_rbx_read(vcpu) << 32) |
+			(kvm_rcx_read(vcpu) & 0xffffffff);
+		outgpa = ((u64)kvm_rdi_read(vcpu) << 32) |
+			(kvm_rsi_read(vcpu) & 0xffffffff);
 	}
 #ifdef CONFIG_X86_64
 	else {
-		param = kvm_register_read(vcpu, VCPU_REGS_RCX);
-		ingpa = kvm_register_read(vcpu, VCPU_REGS_RDX);
-		outgpa = kvm_register_read(vcpu, VCPU_REGS_R8);
+		param = kvm_rcx_read(vcpu);
+		ingpa = kvm_rdx_read(vcpu);
+		outgpa = kvm_r8_read(vcpu);
 	}
 #endif
 
diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index f8f56a93358b..d179b7d7860d 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -9,6 +9,34 @@
 	(X86_CR4_PVI | X86_CR4_DE | X86_CR4_PCE | X86_CR4_OSFXSR  \
 	 | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_PGE)
 
+#define BUILD_KVM_GPR_ACCESSORS(lname, uname)				      \
+static __always_inline unsigned long kvm_##lname##_read(struct kvm_vcpu *vcpu)\
+{									      \
+	return vcpu->arch.regs[VCPU_REGS_##uname];			      \
+}									      \
+static __always_inline void kvm_##lname##_write(struct kvm_vcpu *vcpu,	      \
+						unsigned long val)	      \
+{									      \
+	vcpu->arch.regs[VCPU_REGS_##uname] = val;			      \
+}
+BUILD_KVM_GPR_ACCESSORS(rax, RAX)
+BUILD_KVM_GPR_ACCESSORS(rbx, RBX)
+BUILD_KVM_GPR_ACCESSORS(rcx, RCX)
+BUILD_KVM_GPR_ACCESSORS(rdx, RDX)
+BUILD_KVM_GPR_ACCESSORS(rbp, RBP)
+BUILD_KVM_GPR_ACCESSORS(rsi, RSI)
+BUILD_KVM_GPR_ACCESSORS(rdi, RDI)
+#ifdef CONFIG_X86_64
+BUILD_KVM_GPR_ACCESSORS(r8,  R8)
+BUILD_KVM_GPR_ACCESSORS(r9,  R9)
+BUILD_KVM_GPR_ACCESSORS(r10, R10)
+BUILD_KVM_GPR_ACCESSORS(r11, R11)
+BUILD_KVM_GPR_ACCESSORS(r12, R12)
+BUILD_KVM_GPR_ACCESSORS(r13, R13)
+BUILD_KVM_GPR_ACCESSORS(r14, R14)
+BUILD_KVM_GPR_ACCESSORS(r15, R15)
+#endif
+
 static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
 					      enum kvm_reg reg)
 {
@@ -83,8 +111,8 @@ static inline ulong kvm_read_cr4(struct kvm_vcpu *vcpu)
 
 static inline u64 kvm_read_edx_eax(struct kvm_vcpu *vcpu)
 {
-	return (kvm_register_read(vcpu, VCPU_REGS_RAX) & -1u)
-		| ((u64)(kvm_register_read(vcpu, VCPU_REGS_RDX) & -1u) << 32);
+	return (kvm_rax_read(vcpu) & -1u)
+		| ((u64)(kvm_rdx_read(vcpu) & -1u) << 32);
 }
 
 static inline void enter_guest_mode(struct kvm_vcpu *vcpu)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 406b558abfef..a855e9ec93d4 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2091,7 +2091,7 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
 	init_vmcb(svm);
 
 	kvm_cpuid(vcpu, &eax, &dummy, &dummy, &dummy, true);
-	kvm_register_write(vcpu, VCPU_REGS_RDX, eax);
+	kvm_rdx_write(vcpu, eax);
 
 	if (kvm_vcpu_apicv_active(vcpu) && !init_event)
 		avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
@@ -3408,7 +3408,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 	} else {
 		(void)kvm_set_cr3(&svm->vcpu, hsave->save.cr3);
 	}
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, hsave->save.rax);
+	kvm_rax_write(&svm->vcpu, hsave->save.rax);
 	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp);
 	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, hsave->save.rip);
 	svm->vmcb->save.dr7 = 0;
@@ -3516,7 +3516,7 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 	kvm_mmu_reset_context(&svm->vcpu);
 
 	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, nested_vmcb->save.rax);
+	kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax);
 	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp);
 	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip);
 
@@ -3791,11 +3791,11 @@ static int invlpga_interception(struct vcpu_svm *svm)
 {
 	struct kvm_vcpu *vcpu = &svm->vcpu;
 
-	trace_kvm_invlpga(svm->vmcb->save.rip, kvm_register_read(&svm->vcpu, VCPU_REGS_RCX),
-			  kvm_register_read(&svm->vcpu, VCPU_REGS_RAX));
+	trace_kvm_invlpga(svm->vmcb->save.rip, kvm_rcx_read(&svm->vcpu),
+			  kvm_rax_read(&svm->vcpu));
 
 	/* Let's treat INVLPGA the same as INVLPG (can be optimized!) */
-	kvm_mmu_invlpg(vcpu, kvm_register_read(&svm->vcpu, VCPU_REGS_RAX));
+	kvm_mmu_invlpg(vcpu, kvm_rax_read(&svm->vcpu));
 
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	return kvm_skip_emulated_instruction(&svm->vcpu);
@@ -3803,7 +3803,7 @@ static int invlpga_interception(struct vcpu_svm *svm)
 
 static int skinit_interception(struct vcpu_svm *svm)
 {
-	trace_kvm_skinit(svm->vmcb->save.rip, kvm_register_read(&svm->vcpu, VCPU_REGS_RAX));
+	trace_kvm_skinit(svm->vmcb->save.rip, kvm_rax_read(&svm->vcpu));
 
 	kvm_queue_exception(&svm->vcpu, UD_VECTOR);
 	return 1;
@@ -3817,7 +3817,7 @@ static int wbinvd_interception(struct vcpu_svm *svm)
 static int xsetbv_interception(struct vcpu_svm *svm)
 {
 	u64 new_bv = kvm_read_edx_eax(&svm->vcpu);
-	u32 index = kvm_register_read(&svm->vcpu, VCPU_REGS_RCX);
+	u32 index = kvm_rcx_read(&svm->vcpu);
 
 	if (kvm_set_xcr(&svm->vcpu, index, new_bv) == 0) {
 		svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
@@ -4213,7 +4213,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 
 static int rdmsr_interception(struct vcpu_svm *svm)
 {
-	u32 ecx = kvm_register_read(&svm->vcpu, VCPU_REGS_RCX);
+	u32 ecx = kvm_rcx_read(&svm->vcpu);
 	struct msr_data msr_info;
 
 	msr_info.index = ecx;
@@ -4225,10 +4225,8 @@ static int rdmsr_interception(struct vcpu_svm *svm)
 	} else {
 		trace_kvm_msr_read(ecx, msr_info.data);
 
-		kvm_register_write(&svm->vcpu, VCPU_REGS_RAX,
-				   msr_info.data & 0xffffffff);
-		kvm_register_write(&svm->vcpu, VCPU_REGS_RDX,
-				   msr_info.data >> 32);
+		kvm_rax_write(&svm->vcpu, msr_info.data & 0xffffffff);
+		kvm_rdx_write(&svm->vcpu, msr_info.data >> 32);
 		svm->next_rip = kvm_rip_read(&svm->vcpu) + 2;
 		return kvm_skip_emulated_instruction(&svm->vcpu);
 	}
@@ -4422,7 +4420,7 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
 static int wrmsr_interception(struct vcpu_svm *svm)
 {
 	struct msr_data msr;
-	u32 ecx = kvm_register_read(&svm->vcpu, VCPU_REGS_RCX);
+	u32 ecx = kvm_rcx_read(&svm->vcpu);
 	u64 data = kvm_read_edx_eax(&svm->vcpu);
 
 	msr.data = data;
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d8f101b58ab8..5880c5c7388a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4927,7 +4927,7 @@ static int handle_wbinvd(struct kvm_vcpu *vcpu)
 static int handle_xsetbv(struct kvm_vcpu *vcpu)
 {
 	u64 new_bv = kvm_read_edx_eax(vcpu);
-	u32 index = kvm_register_read(vcpu, VCPU_REGS_RCX);
+	u32 index = kvm_rcx_read(vcpu);
 
 	if (kvm_set_xcr(vcpu, index, new_bv) == 0)
 		return kvm_skip_emulated_instruction(vcpu);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index c09507057743..b6734c2d882f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1096,15 +1096,15 @@ EXPORT_SYMBOL_GPL(kvm_get_dr);
 
 bool kvm_rdpmc(struct kvm_vcpu *vcpu)
 {
-	u32 ecx = kvm_register_read(vcpu, VCPU_REGS_RCX);
+	u32 ecx = kvm_rcx_read(vcpu);
 	u64 data;
 	int err;
 
 	err = kvm_pmu_rdpmc(vcpu, ecx, &data);
 	if (err)
 		return err;
-	kvm_register_write(vcpu, VCPU_REGS_RAX, (u32)data);
-	kvm_register_write(vcpu, VCPU_REGS_RDX, data >> 32);
+	kvm_rax_write(vcpu, (u32)data);
+	kvm_rdx_write(vcpu, data >> 32);
 	return err;
 }
 EXPORT_SYMBOL_GPL(kvm_rdpmc);
@@ -6564,7 +6564,7 @@ static int complete_fast_pio_out(struct kvm_vcpu *vcpu)
 static int kvm_fast_pio_out(struct kvm_vcpu *vcpu, int size,
 			    unsigned short port)
 {
-	unsigned long val = kvm_register_read(vcpu, VCPU_REGS_RAX);
+	unsigned long val = kvm_rax_read(vcpu);
 	int ret = emulator_pio_out_emulated(&vcpu->arch.emulate_ctxt,
 					    size, port, &val, 1);
 
@@ -6588,8 +6588,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
 	}
 
 	/* For size less than 4 we merge, else we zero extend */
-	val = (vcpu->arch.pio.size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX)
-					: 0;
+	val = (vcpu->arch.pio.size < 4) ? kvm_rax_read(vcpu) : 0;
 
 	/*
 	 * Since vcpu->arch.pio.count == 1 let emulator_pio_in_emulated perform
@@ -6597,7 +6596,7 @@ static int complete_fast_pio_in(struct kvm_vcpu *vcpu)
 	 */
 	emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, vcpu->arch.pio.size,
 				 vcpu->arch.pio.port, &val, 1);
-	kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+	kvm_rax_write(vcpu, val);
 
 	return kvm_skip_emulated_instruction(vcpu);
 }
@@ -6609,12 +6608,12 @@ static int kvm_fast_pio_in(struct kvm_vcpu *vcpu, int size,
 	int ret;
 
 	/* For size less than 4 we merge, else we zero extend */
-	val = (size < 4) ? kvm_register_read(vcpu, VCPU_REGS_RAX) : 0;
+	val = (size < 4) ? kvm_rax_read(vcpu) : 0;
 
 	ret = emulator_pio_in_emulated(&vcpu->arch.emulate_ctxt, size, port,
 				       &val, 1);
 	if (ret) {
-		kvm_register_write(vcpu, VCPU_REGS_RAX, val);
+		kvm_rax_write(vcpu, val);
 		return ret;
 	}
 
@@ -7129,11 +7128,11 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	if (kvm_hv_hypercall_enabled(vcpu->kvm))
 		return kvm_hv_hypercall(vcpu);
 
-	nr = kvm_register_read(vcpu, VCPU_REGS_RAX);
-	a0 = kvm_register_read(vcpu, VCPU_REGS_RBX);
-	a1 = kvm_register_read(vcpu, VCPU_REGS_RCX);
-	a2 = kvm_register_read(vcpu, VCPU_REGS_RDX);
-	a3 = kvm_register_read(vcpu, VCPU_REGS_RSI);
+	nr = kvm_rax_read(vcpu);
+	a0 = kvm_rbx_read(vcpu);
+	a1 = kvm_rcx_read(vcpu);
+	a2 = kvm_rdx_read(vcpu);
+	a3 = kvm_rsi_read(vcpu);
 
 	trace_kvm_hypercall(nr, a0, a1, a2, a3);
 
@@ -7174,7 +7173,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 out:
 	if (!op_64_bit)
 		ret = (u32)ret;
-	kvm_register_write(vcpu, VCPU_REGS_RAX, ret);
+	kvm_rax_write(vcpu, ret);
 
 	++vcpu->stat.hypercalls;
 	return kvm_skip_emulated_instruction(vcpu);
@@ -8263,23 +8262,23 @@ static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 		emulator_writeback_register_cache(&vcpu->arch.emulate_ctxt);
 		vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 	}
-	regs->rax = kvm_register_read(vcpu, VCPU_REGS_RAX);
-	regs->rbx = kvm_register_read(vcpu, VCPU_REGS_RBX);
-	regs->rcx = kvm_register_read(vcpu, VCPU_REGS_RCX);
-	regs->rdx = kvm_register_read(vcpu, VCPU_REGS_RDX);
-	regs->rsi = kvm_register_read(vcpu, VCPU_REGS_RSI);
-	regs->rdi = kvm_register_read(vcpu, VCPU_REGS_RDI);
+	regs->rax = kvm_rax_read(vcpu);
+	regs->rbx = kvm_rbx_read(vcpu);
+	regs->rcx = kvm_rcx_read(vcpu);
+	regs->rdx = kvm_rdx_read(vcpu);
+	regs->rsi = kvm_rsi_read(vcpu);
+	regs->rdi = kvm_rdi_read(vcpu);
 	regs->rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
-	regs->rbp = kvm_register_read(vcpu, VCPU_REGS_RBP);
+	regs->rbp = kvm_rbp_read(vcpu);
 #ifdef CONFIG_X86_64
-	regs->r8 = kvm_register_read(vcpu, VCPU_REGS_R8);
-	regs->r9 = kvm_register_read(vcpu, VCPU_REGS_R9);
-	regs->r10 = kvm_register_read(vcpu, VCPU_REGS_R10);
-	regs->r11 = kvm_register_read(vcpu, VCPU_REGS_R11);
-	regs->r12 = kvm_register_read(vcpu, VCPU_REGS_R12);
-	regs->r13 = kvm_register_read(vcpu, VCPU_REGS_R13);
-	regs->r14 = kvm_register_read(vcpu, VCPU_REGS_R14);
-	regs->r15 = kvm_register_read(vcpu, VCPU_REGS_R15);
+	regs->r8 = kvm_r8_read(vcpu);
+	regs->r9 = kvm_r9_read(vcpu);
+	regs->r10 = kvm_r10_read(vcpu);
+	regs->r11 = kvm_r11_read(vcpu);
+	regs->r12 = kvm_r12_read(vcpu);
+	regs->r13 = kvm_r13_read(vcpu);
+	regs->r14 = kvm_r14_read(vcpu);
+	regs->r15 = kvm_r15_read(vcpu);
 #endif
 
 	regs->rip = kvm_rip_read(vcpu);
@@ -8299,23 +8298,23 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	vcpu->arch.emulate_regs_need_sync_from_vcpu = true;
 	vcpu->arch.emulate_regs_need_sync_to_vcpu = false;
 
-	kvm_register_write(vcpu, VCPU_REGS_RAX, regs->rax);
-	kvm_register_write(vcpu, VCPU_REGS_RBX, regs->rbx);
-	kvm_register_write(vcpu, VCPU_REGS_RCX, regs->rcx);
-	kvm_register_write(vcpu, VCPU_REGS_RDX, regs->rdx);
-	kvm_register_write(vcpu, VCPU_REGS_RSI, regs->rsi);
-	kvm_register_write(vcpu, VCPU_REGS_RDI, regs->rdi);
+	kvm_rax_write(vcpu, regs->rax);
+	kvm_rbx_write(vcpu, regs->rbx);
+	kvm_rcx_write(vcpu, regs->rcx);
+	kvm_rdx_write(vcpu, regs->rdx);
+	kvm_rsi_write(vcpu, regs->rsi);
+	kvm_rdi_write(vcpu, regs->rdi);
 	kvm_register_write(vcpu, VCPU_REGS_RSP, regs->rsp);
-	kvm_register_write(vcpu, VCPU_REGS_RBP, regs->rbp);
+	kvm_rbp_write(vcpu, regs->rbp);
 #ifdef CONFIG_X86_64
-	kvm_register_write(vcpu, VCPU_REGS_R8, regs->r8);
-	kvm_register_write(vcpu, VCPU_REGS_R9, regs->r9);
-	kvm_register_write(vcpu, VCPU_REGS_R10, regs->r10);
-	kvm_register_write(vcpu, VCPU_REGS_R11, regs->r11);
-	kvm_register_write(vcpu, VCPU_REGS_R12, regs->r12);
-	kvm_register_write(vcpu, VCPU_REGS_R13, regs->r13);
-	kvm_register_write(vcpu, VCPU_REGS_R14, regs->r14);
-	kvm_register_write(vcpu, VCPU_REGS_R15, regs->r15);
+	kvm_r8_write(vcpu, regs->r8);
+	kvm_r9_write(vcpu, regs->r9);
+	kvm_r10_write(vcpu, regs->r10);
+	kvm_r11_write(vcpu, regs->r11);
+	kvm_r12_write(vcpu, regs->r12);
+	kvm_r13_write(vcpu, regs->r13);
+	kvm_r14_write(vcpu, regs->r14);
+	kvm_r15_write(vcpu, regs->r15);
 #endif
 
 	kvm_rip_write(vcpu, regs->rip);
-- 
2.21.0



* [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs
  2019-04-30 17:36 [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Sean Christopherson
  2019-04-30 17:36 ` [PATCH 1/3] KVM: x86: Omit caching logic for " Sean Christopherson
@ 2019-04-30 17:36 ` Sean Christopherson
  2019-04-30 20:34   ` Paolo Bonzini
  2019-04-30 17:36 ` [PATCH 3/3] KVM: VMX: Use accessors for GPRs outside of dedicated caching logic Sean Christopherson
  2019-04-30 20:03 ` [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Paolo Bonzini
  3 siblings, 1 reply; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 17:36 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář, Joerg Roedel; +Cc: kvm

... to prevent future code from using the unoptimized generic accessors
when hardcoding access to always-available GPRs.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
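
To illustrate the intended effect, with hypothetical call sites that are
not part of this series:

	kvm_register_read(vcpu, VCPU_REGS_RAX);	/* build error: the register
						 * is known at compile time,
						 * use kvm_rax_read() */

	kvm_register_read(vcpu, modrm_reg);	/* ok: 'modrm_reg' is only
						 * known at run time */

The BUILD_BUG_ON(__builtin_constant_p(reg)) added below fires in the
first case because __builtin_constant_p() evaluates to '1' when the
compiler can prove the argument is a compile-time constant.
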
 arch/x86/kvm/kvm_cache_regs.h | 41 +++++++++++++++++++++++++++++------
 arch/x86/kvm/svm.c            |  8 +++----
 arch/x86/kvm/vmx/nested.c     | 12 +++++-----
 arch/x86/kvm/x86.c            |  4 ++--
 4 files changed, 46 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/kvm_cache_regs.h b/arch/x86/kvm/kvm_cache_regs.h
index d179b7d7860d..7074e9f2be79 100644
--- a/arch/x86/kvm/kvm_cache_regs.h
+++ b/arch/x86/kvm/kvm_cache_regs.h
@@ -37,8 +37,8 @@ BUILD_KVM_GPR_ACCESSORS(r14, R14)
 BUILD_KVM_GPR_ACCESSORS(r15, R15)
 #endif
 
-static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
-					      enum kvm_reg reg)
+static inline unsigned long __kvm_register_read(struct kvm_vcpu *vcpu,
+						enum kvm_reg reg)
 {
 	if (!test_bit(reg, (unsigned long *)&vcpu->arch.regs_avail))
 		kvm_x86_ops->cache_reg(vcpu, reg);
@@ -46,23 +46,50 @@ static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
 	return vcpu->arch.regs[reg];
 }
 
-static inline void kvm_register_write(struct kvm_vcpu *vcpu,
-				      enum kvm_reg reg,
-				      unsigned long val)
+static inline void __kvm_register_write(struct kvm_vcpu *vcpu,
+					enum kvm_reg reg,
+					unsigned long val)
 {
 	vcpu->arch.regs[reg] = val;
 	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_dirty);
 	__set_bit(reg, (unsigned long *)&vcpu->arch.regs_avail);
 }
 
+static inline unsigned long kvm_register_read(struct kvm_vcpu *vcpu,
+					      enum kvm_reg reg)
+{
+	BUILD_BUG_ON(__builtin_constant_p(reg));
+
+	return __kvm_register_read(vcpu, reg);
+}
+
+static inline void kvm_register_write(struct kvm_vcpu *vcpu,
+				      enum kvm_reg reg,
+				      unsigned long val)
+{
+	BUILD_BUG_ON(__builtin_constant_p(reg));
+
+	__kvm_register_write(vcpu, reg, val);
+}
+
 static inline unsigned long kvm_rip_read(struct kvm_vcpu *vcpu)
 {
-	return kvm_register_read(vcpu, VCPU_REGS_RIP);
+	return __kvm_register_read(vcpu, VCPU_REGS_RIP);
 }
 
 static inline void kvm_rip_write(struct kvm_vcpu *vcpu, unsigned long val)
 {
-	kvm_register_write(vcpu, VCPU_REGS_RIP, val);
+	__kvm_register_write(vcpu, VCPU_REGS_RIP, val);
+}
+
+static inline unsigned long kvm_rsp_read(struct kvm_vcpu *vcpu)
+{
+	return __kvm_register_read(vcpu, VCPU_REGS_RSP);
+}
+
+static inline void kvm_rsp_write(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	__kvm_register_write(vcpu, VCPU_REGS_RSP, val);
 }
 
 static inline u64 kvm_pdptr_read(struct kvm_vcpu *vcpu, int index)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a855e9ec93d4..bfb5c9ac0e24 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3409,8 +3409,8 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
 		(void)kvm_set_cr3(&svm->vcpu, hsave->save.cr3);
 	}
 	kvm_rax_write(&svm->vcpu, hsave->save.rax);
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp);
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, hsave->save.rip);
+	kvm_rsp_write(&svm->vcpu, hsave->save.rsp);
+	kvm_rip_write(&svm->vcpu, hsave->save.rip);
 	svm->vmcb->save.dr7 = 0;
 	svm->vmcb->save.cpl = 0;
 	svm->vmcb->control.exit_int_info = 0;
@@ -3517,8 +3517,8 @@ static void enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb_gpa,
 
 	svm->vmcb->save.cr2 = svm->vcpu.arch.cr2 = nested_vmcb->save.cr2;
 	kvm_rax_write(&svm->vcpu, nested_vmcb->save.rax);
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp);
-	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip);
+	kvm_rsp_write(&svm->vcpu, nested_vmcb->save.rsp);
+	kvm_rip_write(&svm->vcpu, nested_vmcb->save.rip);
 
 	/* In case we don't even reach vcpu_run, the fields are not updated */
 	svm->vmcb->save.rax = nested_vmcb->save.rax;
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index b46136b099b8..35d92f5ab2de 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -2384,8 +2384,8 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
 	if (!enable_ept)
 		vcpu->arch.walk_mmu->inject_page_fault = vmx_inject_page_fault_nested;
 
-	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->guest_rsp);
-	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
+	kvm_rsp_write(vcpu, vmcs12->guest_rsp);
+	kvm_rip_write(vcpu, vmcs12->guest_rip);
 	return 0;
 }
 
@@ -3429,8 +3429,8 @@ static void sync_vmcs12(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12)
 	vmcs12->guest_cr0 = vmcs12_guest_cr0(vcpu, vmcs12);
 	vmcs12->guest_cr4 = vmcs12_guest_cr4(vcpu, vmcs12);
 
-	vmcs12->guest_rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
-	vmcs12->guest_rip = kvm_register_read(vcpu, VCPU_REGS_RIP);
+	vmcs12->guest_rsp = kvm_rsp_read(vcpu);
+	vmcs12->guest_rip = kvm_rip_read(vcpu);
 	vmcs12->guest_rflags = vmcs_readl(GUEST_RFLAGS);
 
 	vmcs12->guest_es_selector = vmcs_read16(GUEST_ES_SELECTOR);
@@ -3613,8 +3613,8 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
 		vcpu->arch.efer &= ~(EFER_LMA | EFER_LME);
 	vmx_set_efer(vcpu, vcpu->arch.efer);
 
-	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
-	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
+	kvm_rsp_write(vcpu, vmcs12->host_rsp);
+	kvm_rip_write(vcpu, vmcs12->host_rip);
 	vmx_set_rflags(vcpu, X86_EFLAGS_FIXED);
 	vmx_set_interrupt_shadow(vcpu, 0);
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b6734c2d882f..200e424afcc3 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -8268,7 +8268,7 @@ static void __get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	regs->rdx = kvm_rdx_read(vcpu);
 	regs->rsi = kvm_rsi_read(vcpu);
 	regs->rdi = kvm_rdi_read(vcpu);
-	regs->rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
+	regs->rsp = kvm_rsp_read(vcpu);
 	regs->rbp = kvm_rbp_read(vcpu);
 #ifdef CONFIG_X86_64
 	regs->r8 = kvm_r8_read(vcpu);
@@ -8304,7 +8304,7 @@ static void __set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 	kvm_rdx_write(vcpu, regs->rdx);
 	kvm_rsi_write(vcpu, regs->rsi);
 	kvm_rdi_write(vcpu, regs->rdi);
-	kvm_register_write(vcpu, VCPU_REGS_RSP, regs->rsp);
+	kvm_rsp_write(vcpu, regs->rsp);
 	kvm_rbp_write(vcpu, regs->rbp);
 #ifdef CONFIG_X86_64
 	kvm_r8_write(vcpu, regs->r8);
-- 
2.21.0



* [PATCH 3/3] KVM: VMX: Use accessors for GPRs outside of dedicated caching logic
  2019-04-30 17:36 [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Sean Christopherson
  2019-04-30 17:36 ` [PATCH 1/3] KVM: x86: Omit caching logic for " Sean Christopherson
  2019-04-30 17:36 ` [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs Sean Christopherson
@ 2019-04-30 17:36 ` Sean Christopherson
  2019-04-30 20:03 ` [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Paolo Bonzini
  3 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 17:36 UTC (permalink / raw)
  To: Paolo Bonzini, Radim Krčmář, Joerg Roedel; +Cc: kvm

... now that there is no overhead when using dedicated accessors.

Opportunistically remove a bogus "FIXME" in handle_rdmsr() regarding
the upper 32 bits of RAX and RDX.  Zeroing the upper 32 bits is
architecturally correct as 32-bit writes in 64-bit mode unconditionally
clear the upper 32 bits.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
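
For illustration, the architectural rule in question: a 32-bit write in
64-bit mode zero-extends into bits 63:32 of the destination GPR, so the
EDX:EAX split below is lossless (sketch with an arbitrary value):

	u64 data = 0x1122334455667788ull;

	kvm_rax_write(vcpu, data & -1u);	/* RAX = 0x0000000055667788 */
	kvm_rdx_write(vcpu, data >> 32);	/* RDX = 0x0000000011223344 */

	/*
	 * kvm_read_edx_eax() reassembles the original value:
	 * (RAX & -1u) | ((u64)(RDX & -1u) << 32) == 0x1122334455667788
	 */
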
 arch/x86/kvm/vmx/nested.c |  6 +++---
 arch/x86/kvm/vmx/vmx.c    | 12 +++++-------
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index 35d92f5ab2de..449cf878d9df 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4808,7 +4808,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
 static int nested_vmx_eptp_switching(struct kvm_vcpu *vcpu,
 				     struct vmcs12 *vmcs12)
 {
-	u32 index = vcpu->arch.regs[VCPU_REGS_RCX];
+	u32 index = kvm_rcx_read(vcpu);
 	u64 address;
 	bool accessed_dirty;
 	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
@@ -4854,7 +4854,7 @@ static int handle_vmfunc(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_vmx *vmx = to_vmx(vcpu);
 	struct vmcs12 *vmcs12;
-	u32 function = vcpu->arch.regs[VCPU_REGS_RAX];
+	u32 function = kvm_rax_read(vcpu);
 
 	/*
 	 * VMFUNC is only supported for nested guests, but we always enable the
@@ -4940,7 +4940,7 @@ static bool nested_vmx_exit_handled_io(struct kvm_vcpu *vcpu,
 static bool nested_vmx_exit_handled_msr(struct kvm_vcpu *vcpu,
 	struct vmcs12 *vmcs12, u32 exit_reason)
 {
-	u32 msr_index = vcpu->arch.regs[VCPU_REGS_RCX];
+	u32 msr_index = kvm_rcx_read(vcpu);
 	gpa_t bitmap;
 
 	if (!nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS))
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 5880c5c7388a..553068867c62 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -4832,7 +4832,7 @@ static int handle_cpuid(struct kvm_vcpu *vcpu)
 
 static int handle_rdmsr(struct kvm_vcpu *vcpu)
 {
-	u32 ecx = vcpu->arch.regs[VCPU_REGS_RCX];
+	u32 ecx = kvm_rcx_read(vcpu);
 	struct msr_data msr_info;
 
 	msr_info.index = ecx;
@@ -4845,18 +4845,16 @@ static int handle_rdmsr(struct kvm_vcpu *vcpu)
 
 	trace_kvm_msr_read(ecx, msr_info.data);
 
-	/* FIXME: handling of bits 32:63 of rax, rdx */
-	vcpu->arch.regs[VCPU_REGS_RAX] = msr_info.data & -1u;
-	vcpu->arch.regs[VCPU_REGS_RDX] = (msr_info.data >> 32) & -1u;
+	kvm_rax_write(vcpu, msr_info.data & -1u);
+	kvm_rdx_write(vcpu, (msr_info.data >> 32) & -1u);
 	return kvm_skip_emulated_instruction(vcpu);
 }
 
 static int handle_wrmsr(struct kvm_vcpu *vcpu)
 {
 	struct msr_data msr;
-	u32 ecx = vcpu->arch.regs[VCPU_REGS_RCX];
-	u64 data = (vcpu->arch.regs[VCPU_REGS_RAX] & -1u)
-		| ((u64)(vcpu->arch.regs[VCPU_REGS_RDX] & -1u) << 32);
+	u32 ecx = kvm_rcx_read(vcpu);
+	u64 data = kvm_read_edx_eax(vcpu);
 
 	msr.data = data;
 	msr.index = ecx;
-- 
2.21.0



* Re: [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs
  2019-04-30 17:36 [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Sean Christopherson
                   ` (2 preceding siblings ...)
  2019-04-30 17:36 ` [PATCH 3/3] KVM: VMX: Use accessors for GPRs outside of dedicated caching logic Sean Christopherson
@ 2019-04-30 20:03 ` Paolo Bonzini
  2019-04-30 20:19   ` Sean Christopherson
  3 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2019-04-30 20:03 UTC (permalink / raw)
  To: Sean Christopherson, Radim Krčmář, Joerg Roedel; +Cc: kvm

On 30/04/19 19:36, Sean Christopherson wrote:
> KVM's GPR caching logic is unconditionally emitted for all GPR accesses
> (that go through the accessors), even when the register being accessed
> is fixed and always available.  This bloats KVM due to the instructions
> needed to test and set the available/dirty bitmaps, and to conditionally
> invoke the .cache_reg() callback.  The latter is especially painful when
> compiling with retpolines.
> 
> Eliminate the unnecessary overhead by:
> 
>  - Adding dedicated accessors for every GPR
>  - Omitting the caching logic for GPRs that are always available
>  - Preventing use of the unoptimized versions for fixed accesses
> 
> The last patch is an opportunistic clean up of VMX, which has gradually
> acquired a bad habit of sprinkling in direct access to GPRs.

Another related cleanup is to replace these with the direct accessors:

arch/x86/kvm/vmx/nested.c:	vmcs12->guest_rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
arch/x86/kvm/vmx/nested.c:	vmcs12->guest_rip = kvm_register_read(vcpu, VCPU_REGS_RIP);
arch/x86/kvm/x86.c:	regs->rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp);
arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, hsave->save.rip);
arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp);
arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip);
arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->guest_rsp);
arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
arch/x86/kvm/x86.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, regs->rsp);

I can take care of this.  I have applied patches 1 and 3.  I didn't apply
patch 2 for the reasons I mentioned in my reply, and because I am not sure
if it works properly---it should have flagged the above occurrences,
shouldn't it?

Paolo


* Re: [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs
  2019-04-30 20:03 ` [PATCH 0/3] KVM: x86: Drop "caching" of always-available GPRs Paolo Bonzini
@ 2019-04-30 20:19   ` Sean Christopherson
  0 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 20:19 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Radim Krčmář, Joerg Roedel, kvm

On Tue, Apr 30, 2019 at 10:03:59PM +0200, Paolo Bonzini wrote:
> On 30/04/19 19:36, Sean Christopherson wrote:
> > KVM's GPR caching logic is unconditionally emitted for all GPR accesses
> > (that go through the accessors), even when the register being accessed
> > is fixed and always available.  This bloats KVM due to the instructions
> > needed to test and set the available/dirty bitmaps, and to conditionally
> > invoke the .cache_reg() callback.  The latter is especially painful when
> > compiling with retpolines.
> > 
> > Eliminate the unnecessary overhead by:
> > 
> >  - Adding dedicated accessors for every GPR
> >  - Omitting the caching logic for GPRs that are always available
> >  - Preventing use of the unoptimized versions for fixed accesses
> > 
> > The last patch is an opportunistic clean up of VMX, which has gradually
> > acquired a bad habit of sprinkling in direct access to GPRs.
> 
> Another related cleanup is to replace these with the direct accessors:
> 
> arch/x86/kvm/vmx/nested.c:	vmcs12->guest_rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
> arch/x86/kvm/vmx/nested.c:	vmcs12->guest_rip = kvm_register_read(vcpu, VCPU_REGS_RIP);
> arch/x86/kvm/x86.c:	regs->rsp = kvm_register_read(vcpu, VCPU_REGS_RSP);
> arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp);
> arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, hsave->save.rip);
> arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp);
> arch/x86/kvm/svm.c:	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip);
> arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->guest_rsp);
> arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->guest_rip);
> arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, vmcs12->host_rsp);
> arch/x86/kvm/vmx/nested.c:	kvm_register_write(vcpu, VCPU_REGS_RIP, vmcs12->host_rip);
> arch/x86/kvm/x86.c:	kvm_register_write(vcpu, VCPU_REGS_RSP, regs->rsp);
> 
> I can take care of this.  I have applied patches 1 and 3.  I didn't apply
> patch 2 for the reasons I mentioned in my reply, and because I am not sure
> if it works properly---it should have flagged the above occurrences,
> shouldn't it?

I squeezed the cleanup into patch 2.  I should have called that out in the
changelog but didn't for whatever reason.  Probably would have been even
better to do the refactor in a separate patch.  Sorry :(


* Re: [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs
  2019-04-30 17:36 ` [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs Sean Christopherson
@ 2019-04-30 20:34   ` Paolo Bonzini
  2019-04-30 20:44     ` Sean Christopherson
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2019-04-30 20:34 UTC (permalink / raw)
  To: Sean Christopherson, Radim Krčmář, Joerg Roedel; +Cc: kvm

On 30/04/19 19:36, Sean Christopherson wrote:
> ... to prevent future code from using the unoptimized generic accessors
> when hardcoding access to always-available GPRs.

This cannot be done in general, because builtin_constant_p could be used
through layers of inlining.  For example you could have a function that
takes an enum kvm_reg argument and _its_ caller passes a constant in
there.  Of course we may just say that we don't have such a case now (do
we?) and so it's unlikely to happen in the future as well.
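
E.g. a hypothetical helper like the one below is fine in isolation, but
once the compiler inlines it into a caller that passes a constant, the
BUILD_BUG_ON() could fire at a perfectly legitimate call site:

	/* 'reg' is an opaque parameter here, not a constant... */
	static unsigned long read_gpr(struct kvm_vcpu *vcpu, enum kvm_reg reg)
	{
		return kvm_register_read(vcpu, reg);
	}

	/*
	 * ...but after inlining, __builtin_constant_p(reg) may evaluate to
	 * true at this call site, depending on the optimization level.
	 */
	val = read_gpr(vcpu, VCPU_REGS_RBX);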

Paolo


* Re: [PATCH 2/3] KVM: x86: Prevent use of kvm_register_{read,write}() with known GPRs
  2019-04-30 20:34   ` Paolo Bonzini
@ 2019-04-30 20:44     ` Sean Christopherson
  0 siblings, 0 replies; 8+ messages in thread
From: Sean Christopherson @ 2019-04-30 20:44 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Radim Krčmář, Joerg Roedel, kvm

On Tue, Apr 30, 2019 at 10:34:52PM +0200, Paolo Bonzini wrote:
> On 30/04/19 19:36, Sean Christopherson wrote:
> > ... to prevent future code from using the unoptimized generic accessors
> > when hardcoding access to always-available GPRs.
> 
> This cannot be done in general, because builtin_constant_p could be used
> through layers of inlining.  For example you could have a function that
> takes an enum kvm_reg argument and _its_ caller passes a constant in
> there.  Of course we may just say that we don't have such a case now (do
> we?) and so it's unlikely to happen in the future as well.

Another potential hiccup, and probably more likely, is that
the register save loops in enter_smm_save_state_{32,64} get unrolled by
the compiler.
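
E.g. the 64-bit flavor saves GPRs via a loop along these lines
(paraphrased from x86.c, from memory); if the compiler fully unrolls it,
'i' becomes a compile-time constant at every kvm_register_read() call
site:

	for (i = 0; i < 16; i++)
		put_smstate(u64, buf, 0x7ff8 - i * 8,
			    kvm_register_read(vcpu, i));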

My thought was (is?) that the only time KVM should ever use the
generic accessors is when the register is truly unknown, i.e. comes
from emulating an instruction with ModRM (or equivalent).

I thought about manually unrolling enter_smm_save_state_{32,64} so as
to avoid the caching logic, but that seemed like overkill at the time.
I didn't consider the compiler unrolling the loop and exploding.  I'll
see if I can come up with anything clever for the SMM flow and expand
the changelog if I end up with a version that is "guaranteed" to not
run afoul of compiler tricks.

P.S. Do you want me to send a patch with the cleanup parts that I unwisely
smushed into this patch?  Or have you already taken care of that?

