public inbox for kvm@vger.kernel.org
* [PATCH 0/4] Lightweight svm vmload/vmsave (almost)
@ 2010-10-21 10:20 Avi Kivity
  2010-10-21 10:20 ` [PATCH 1/4] KVM: SVM: Move guest register save out of interrupts disabled section Avi Kivity
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Avi Kivity @ 2010-10-21 10:20 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

This patchset moves svm towards a lightweight vmload/vmsave path.  It was
hindered by CVE-2010-3698 which was discovered during its development, and
by the lack of per-cpu IDT in Linux, which makes it more or less useless.
However, even so it's a slight improvement, and merging it will reduce the
work needed when we do have per-cpu IDT.
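As a rough illustration of what "lightweight" buys here (not kernel code, all names invented for this sketch): the series moves host-state restore work from the per-vmexit path to the per-context-switch (vcpu_put) path, and vmexits typically far outnumber context switches.

```c
#include <assert.h>

/* Toy model (not kernel code): compares how much host-state restore
 * work is done when restore happens on every vmexit (old scheme)
 * versus only on the heavyweight context-switch path this series
 * moves towards. Function names are illustrative only. */

/* Old scheme: host segment/MSR state reloaded after every vmexit. */
static int reloads_old(int vmexits, int context_switches)
{
    (void)context_switches;
    return vmexits;
}

/* New scheme: reload deferred to vcpu_put(), i.e. once per context
 * switch, no matter how many vmexits happened in between. */
static int reloads_new(int vmexits, int context_switches)
{
    (void)vmexits;
    return context_switches;
}
```

With, say, 1000 vmexits per 10 context switches, the deferred scheme does two orders of magnitude fewer reloads.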

Avi Kivity (4):
  KVM: SVM: Move guest register save out of interrupts disabled section
  KVM: SVM: Move svm->host_gs_base into a separate structure
  KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path
  KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their
    callers

 arch/x86/kvm/svm.c |   61 +++++++++++++++++++++++-----------------------------
 1 files changed, 27 insertions(+), 34 deletions(-)

-- 
1.7.3.1



* [PATCH 1/4] KVM: SVM: Move guest register save out of interrupts disabled section
  2010-10-21 10:20 [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Avi Kivity
@ 2010-10-21 10:20 ` Avi Kivity
  2010-10-21 10:20 ` [PATCH 2/4] KVM: SVM: Move svm->host_gs_base into a separate structure Avi Kivity
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Avi Kivity @ 2010-10-21 10:20 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

Saving guest registers is just a memory copy, and does not need to be in the
interrupts-disabled critical section.  Move it outside to improve interrupt
latency a bit.
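The key observation can be sketched in plain C (a toy model, not kernel code; the struct layouts are illustrative only): the copy reads the in-memory VMCB save area and has no dependency on the interrupt state, so it can safely run after stgi().

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the reordering in this patch: copying the guest's
 * cr2/rax/rsp/rip out of the VMCB save area is ordinary memory
 * traffic, so it need not sit inside the interrupts-disabled
 * clgi()/stgi() window. */

struct vmcb_save_area { uint64_t cr2, rax, rsp, rip; };
struct guest_regs     { uint64_t cr2, rax, rsp, rip; };

/* Just a memory copy; no ordering requirement vs. interrupts. */
static void save_guest_regs(struct guest_regs *regs,
                            const struct vmcb_save_area *save)
{
    regs->cr2 = save->cr2;
    regs->rax = save->rax;
    regs->rsp = save->rsp;
    regs->rip = save->rip;
}
```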

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/svm.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 2e57450..9d703e2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -3412,11 +3412,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 #endif
 		);
 
-	vcpu->arch.cr2 = svm->vmcb->save.cr2;
-	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
-	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
-	vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
-
 	load_host_msrs(vcpu);
 	kvm_load_ldt(ldt_selector);
 	loadsegment(fs, fs_selector);
@@ -3433,6 +3428,11 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	stgi();
 
+	vcpu->arch.cr2 = svm->vmcb->save.cr2;
+	vcpu->arch.regs[VCPU_REGS_RAX] = svm->vmcb->save.rax;
+	vcpu->arch.regs[VCPU_REGS_RSP] = svm->vmcb->save.rsp;
+	vcpu->arch.regs[VCPU_REGS_RIP] = svm->vmcb->save.rip;
+
 	sync_cr8_to_lapic(vcpu);
 
 	svm->next_rip = 0;
-- 
1.7.3.1



* [PATCH 2/4] KVM: SVM: Move svm->host_gs_base into a separate structure
  2010-10-21 10:20 [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Avi Kivity
  2010-10-21 10:20 ` [PATCH 1/4] KVM: SVM: Move guest register save out of interrupts disabled section Avi Kivity
@ 2010-10-21 10:20 ` Avi Kivity
  2010-10-21 10:20 ` [PATCH 3/4] KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path Avi Kivity
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Avi Kivity @ 2010-10-21 10:20 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

More members will join it soon.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/svm.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 9d703e2..451590e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -124,7 +124,9 @@ struct vcpu_svm {
 	u64 next_rip;
 
 	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
-	u64 host_gs_base;
+	struct {
+		u64 gs_base;
+	} host;
 
 	u32 *msrpm;
 
@@ -1353,14 +1355,14 @@ static void svm_guest_debug(struct kvm_vcpu *vcpu, struct kvm_guest_debug *dbg)
 static void load_host_msrs(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, to_svm(vcpu)->host_gs_base);
+	wrmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
 #endif
 }
 
 static void save_host_msrs(struct kvm_vcpu *vcpu)
 {
 #ifdef CONFIG_X86_64
-	rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host_gs_base);
+	rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
 #endif
 }
 
-- 
1.7.3.1



* [PATCH 3/4] KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path
  2010-10-21 10:20 [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Avi Kivity
  2010-10-21 10:20 ` [PATCH 1/4] KVM: SVM: Move guest register save out of interrupts disabled section Avi Kivity
  2010-10-21 10:20 ` [PATCH 2/4] KVM: SVM: Move svm->host_gs_base into a separate structure Avi Kivity
@ 2010-10-21 10:20 ` Avi Kivity
  2010-10-21 10:20 ` [PATCH 4/4] KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their callers Avi Kivity
  2010-10-22 15:41 ` [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Marcelo Tosatti
  4 siblings, 0 replies; 6+ messages in thread
From: Avi Kivity @ 2010-10-21 10:20 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

The ldt is never used in kernel context; the same goes for fs on x86_64 and
gs on i386.  So save/restore them in the heavyweight exit path instead of
the lightweight path.

By itself, this doesn't buy us much, but it paves the way for moving vmload
and vmsave to the heavyweight exit path, since they modify the same registers.
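The vcpu_load/vcpu_put pairing this patch relies on can be modelled in a few lines (a toy sketch, not kernel code; the `host` struct mirrors the fields the patch adds to struct vcpu_svm, and everything else stands in for the real savesegment()/loadsegment()/kvm_read_ldt() machinery):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: host fs/gs/ldt are snapshotted once in vcpu_load() and
 * restored once in vcpu_put() (the heavyweight path), rather than
 * around every individual guest run. */

struct host_state { uint16_t fs, gs, ldt; };

/* Pretend CPU segment state; values are arbitrary selectors. */
static struct host_state cpu_segs = { 0x23, 0x2b, 0x48 };

static void vcpu_load(struct host_state *host)      /* heavyweight entry */
{
    *host = cpu_segs;             /* savesegment()/kvm_read_ldt() */
}

static void lightweight_vmrun(void)
{
    /* guest run clobbers segment state; nothing is restored here */
    cpu_segs.fs = cpu_segs.gs = cpu_segs.ldt = 0;
}

static void vcpu_put(const struct host_state *host) /* heavyweight exit */
{
    cpu_segs = *host;             /* loadsegment()/kvm_load_ldt() */
}
```

Any number of lightweight vmruns may happen between one load/put pair; the host state still comes back intact at vcpu_put() time.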

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/svm.c |   34 +++++++++++++++++++---------------
 1 files changed, 19 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 451590e..ec392d6 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -125,6 +125,9 @@ struct vcpu_svm {
 
 	u64 host_user_msrs[NR_HOST_SAVE_USER_MSRS];
 	struct {
+		u16 fs;
+		u16 gs;
+		u16 ldt;
 		u64 gs_base;
 	} host;
 
@@ -184,6 +187,9 @@ static int nested_svm_vmexit(struct vcpu_svm *svm);
 static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 				      bool has_error_code, u32 error_code);
 
+static void save_host_msrs(struct kvm_vcpu *vcpu);
+static void load_host_msrs(struct kvm_vcpu *vcpu);
+
 static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_svm, vcpu);
@@ -996,6 +1002,11 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		svm->asid_generation = 0;
 	}
 
+	save_host_msrs(vcpu);
+	savesegment(fs, svm->host.fs);
+	savesegment(gs, svm->host.gs);
+	svm->host.ldt = kvm_read_ldt();
+
 	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 		rdmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
@@ -1006,6 +1017,14 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
 	int i;
 
 	++vcpu->stat.host_state_reload;
+	kvm_load_ldt(svm->host.ldt);
+	loadsegment(fs, svm->host.fs);
+#ifdef CONFIG_X86_64
+	load_gs_index(svm->host.gs);
+	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
+#else
+	loadsegment(gs, svm->host.gs);
+#endif
 	for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
 		wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
 }
@@ -3314,9 +3333,6 @@ static void svm_cancel_injection(struct kvm_vcpu *vcpu)
 static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 {
 	struct vcpu_svm *svm = to_svm(vcpu);
-	u16 fs_selector;
-	u16 gs_selector;
-	u16 ldt_selector;
 
 	svm->vmcb->save.rax = vcpu->arch.regs[VCPU_REGS_RAX];
 	svm->vmcb->save.rsp = vcpu->arch.regs[VCPU_REGS_RSP];
@@ -3333,10 +3349,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 
 	sync_lapic_to_cr8(vcpu);
 
-	save_host_msrs(vcpu);
-	savesegment(fs, fs_selector);
-	savesegment(gs, gs_selector);
-	ldt_selector = kvm_read_ldt();
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;
 
 	clgi();
@@ -3415,14 +3427,6 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 		);
 
 	load_host_msrs(vcpu);
-	kvm_load_ldt(ldt_selector);
-	loadsegment(fs, fs_selector);
-#ifdef CONFIG_X86_64
-	load_gs_index(gs_selector);
-	wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
-#else
-	loadsegment(gs, gs_selector);
-#endif
 
 	reload_tss(vcpu);
 
-- 
1.7.3.1



* [PATCH 4/4] KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their callers
  2010-10-21 10:20 [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Avi Kivity
                   ` (2 preceding siblings ...)
  2010-10-21 10:20 ` [PATCH 3/4] KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path Avi Kivity
@ 2010-10-21 10:20 ` Avi Kivity
  2010-10-22 15:41 ` [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Marcelo Tosatti
  4 siblings, 0 replies; 6+ messages in thread
From: Avi Kivity @ 2010-10-21 10:20 UTC (permalink / raw)
  To: Marcelo Tosatti, kvm

This abstraction only serves to obfuscate.  Remove.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/svm.c |   25 ++++++-------------------
 1 files changed, 6 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ec392d6..fad4038 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -187,9 +187,6 @@ static int nested_svm_vmexit(struct vcpu_svm *svm);
 static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
 				      bool has_error_code, u32 error_code);
 
-static void save_host_msrs(struct kvm_vcpu *vcpu);
-static void load_host_msrs(struct kvm_vcpu *vcpu);
-
 static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_svm, vcpu);
@@ -1002,7 +999,9 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		svm->asid_generation = 0;
 	}
 
-	save_host_msrs(vcpu);
+#ifdef CONFIG_X86_64
+	rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
+#endif
 	savesegment(fs, svm->host.fs);
 	savesegment(gs, svm->host.gs);
 	svm->host.ldt = kvm_read_ldt();
@@ -1371,20 +1370,6 @@ static void svm_guest_debug(struct kvm_vcpu *vcpu, struct kvm_guest_debug *dbg)
 	update_db_intercept(vcpu);
 }
 
-static void load_host_msrs(struct kvm_vcpu *vcpu)
-{
-#ifdef CONFIG_X86_64
-	wrmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
-#endif
-}
-
-static void save_host_msrs(struct kvm_vcpu *vcpu)
-{
-#ifdef CONFIG_X86_64
-	rdmsrl(MSR_GS_BASE, to_svm(vcpu)->host.gs_base);
-#endif
-}
-
 static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
 {
 	if (sd->next_asid > sd->max_asid) {
@@ -3426,7 +3411,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
 #endif
 		);
 
-	load_host_msrs(vcpu);
+#ifdef CONFIG_X86_64
+	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
+#endif
 
 	reload_tss(vcpu);
 
-- 
1.7.3.1



* Re: [PATCH 0/4] Lightweight svm vmload/vmsave (almost)
  2010-10-21 10:20 [PATCH 0/4] Lightweight svm vmload/vmsave (almost) Avi Kivity
                   ` (3 preceding siblings ...)
  2010-10-21 10:20 ` [PATCH 4/4] KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their callers Avi Kivity
@ 2010-10-22 15:41 ` Marcelo Tosatti
  4 siblings, 0 replies; 6+ messages in thread
From: Marcelo Tosatti @ 2010-10-22 15:41 UTC (permalink / raw)
  To: Avi Kivity; +Cc: kvm

On Thu, Oct 21, 2010 at 12:20:30PM +0200, Avi Kivity wrote:
> This patchset moves svm towards a lightweight vmload/vmsave path.  It was
> hindered by CVE-2010-3698 which was discovered during its development, and
> by the lack of per-cpu IDT in Linux, which makes it more or less useless.
> However, even so it's a slight improvement, and merging it will reduce the
> work needed when we do have per-cpu IDT.
> 
> Avi Kivity (4):
>   KVM: SVM: Move guest register save out of interrupts disabled section
>   KVM: SVM: Move svm->host_gs_base into a separate structure
>   KVM: SVM: Move fs/gs/ldt save/restore to heavyweight exit path
>   KVM: SVM: Fold save_host_msrs() and load_host_msrs() into their
>     callers
> 
>  arch/x86/kvm/svm.c |   61 +++++++++++++++++++++++-----------------------------
>  1 files changed, 27 insertions(+), 34 deletions(-)
> 

Applied, thanks.

