* guest state leak into host
@ 2007-05-10 9:29 Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016AB9E3-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-10 9:29 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
Avi:
The commit below mentions guest state leaking into the host. Can you
explain a bit?
In my understanding, once control reaches vmx_vcpu_run, CPU
preemption is disabled, i.e. no rescheduling will happen (guest
rescheduling only happens when the ioctl returns to QEMU, or at
vcpu_put in kvm_vcpu_ioctl_run). In that case, letting the machine FPU
hold guest state (the Linux kernel itself will not use the FPU), and
letting the machine MSRs (SYSCALL_MASK, LSTAR, K6_STAR, CSTAR,
GS_BASE) hold the guest MSR values avoids the save/restore and thus
gains performance. But I may be making some mistake.
BTW, a similar approach is taken in Xen. A lightweight VM exit, which
does not trigger a domain switch, is handled by the hypervisor and only
saves/restores a few MSRs. A heavyweight VM exit, which triggers a
domain switch (similar to returning to QEMU in KVM), does the full set
of MSR save/restores.
thx,eddie
commit bc8dcc2107de0ba8f25fc910c4559ebe3df33045
Author: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
Date: Wed May 2 16:54:03 2007 +0300
KVM: Fix potential guest state leak into host
The lightweight vmexit path avoids saving and reloading certain host
state. However, in certain cases lightweight vmexit handling can
schedule(), which requires reloading the host state.

So we store the host state in the vcpu structure, and reload it if we
relinquish the vcpu.
-------------------------------------------------------------------------
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: guest state leak into host
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016AB9E3-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-13 10:51 ` Avi Kivity
[not found] ` <4646EDB3.6030602-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2007-05-13 10:51 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel
Dong, Eddie wrote:
> Avi:
> The commit below mentions guest state leaking into the host. Can you
> explain a bit?
> In my understanding, once control reaches vmx_vcpu_run, CPU
> preemption is disabled, i.e. no rescheduling will happen (guest
> rescheduling only happens when the ioctl returns to QEMU, or at
> vcpu_put in kvm_vcpu_ioctl_run). In that case, letting the machine FPU
> hold guest state (the Linux kernel itself will not use the FPU), and
> letting the machine MSRs (SYSCALL_MASK, LSTAR, K6_STAR, CSTAR,
> GS_BASE) hold the guest MSR values avoids the save/restore and thus
> gains performance. But I may be making some mistake.
>
Some exit handlers (even the #PF handler) can sleep sometimes. They
call kvm_arch_ops->vcpu_put(), do some sleepy thing, then call
kvm_arch_ops->vcpu_load(). The changes in the commit make sure that if
vcpu_put() is called, the lightweight exit is converted to a heavyweight
exit. Since such sleeps are rare, this is not expected to impact
performance.
See for example mmu_topup_memory_caches().
--
error compiling committee.c: too many arguments to function
* [PATCH] lightweight VM Exit (was: RE: guest state leak into host)
[not found] ` <4646EDB3.6030602-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-05-14 14:57 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FBD91-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-14 14:57 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
[-- Attachment #1: Type: text/plain, Size: 7368 bytes --]
Avi Kivity wrote:
>
> Some exit handlers (even the #PF handler) can sleep sometimes. They
> call kvm_arch_ops->vcpu_put(), do some sleepy thing, then call
> kvm_arch_ops->vcpu_load(). The changes in the commit make
> sure that if
> vcpu_put() is called, the lightweight exit is converted to a
> heavyweight exit. Since such sleeps are rare, this is not expected
> to impact performance.
>
> See for example mmu_topup_memory_caches().
>
>
OK, how about this patch, which further reduces the lightweight VM exit
MSR save/restore?
thx,eddie
Signed-off-by: Yaozu(Eddie) Dong Eddie.Dong-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org
against ca76d209b88c344fc6a8eac17057c0088a3d6940.
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..e61a7e6 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,7 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int smsrs_bitmap;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
@@ -513,6 +514,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);
void fx_init(struct kvm_vcpu *vcpu);
+void load_msrs_select(struct vmx_msr_entry *e, int bitmap);
+void save_msrs_select(struct vmx_msr_entry *e, int bitmap);
void load_msrs(struct vmx_msr_entry *e, int n);
void save_msrs(struct vmx_msr_entry *e, int n);
void kvm_resched(struct kvm_vcpu *vcpu);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 1288cff..ef96fae 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -1596,6 +1596,30 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
+void load_msrs_select(struct vmx_msr_entry *e, int bitmap)
+{
+ unsigned long nr;
+
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ clear_bit(nr,&bitmap);
+ wrmsrl(e[nr].index, e[nr].data);
+ }
+}
+EXPORT_SYMBOL_GPL(load_msrs_select);
+
+void save_msrs_select(struct vmx_msr_entry *e, int bitmap)
+{
+ unsigned long nr;
+
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ clear_bit(nr,&bitmap);
+ rdmsrl(e[nr].index, e[nr].data);
+ }
+}
+EXPORT_SYMBOL_GPL(save_msrs_select);
+
void load_msrs(struct vmx_msr_entry *e, int n)
{
int i;
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..67d076c 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -86,15 +86,6 @@ static const u32 vmx_msr_index[] = {
#ifdef CONFIG_X86_64
static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
#endif
static inline int is_page_fault(u32 intr_info)
@@ -117,13 +108,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -307,9 +308,9 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
}
#endif
+ load_msrs_select(vcpu->guest_msrs, vcpu->smsrs_bitmap);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +337,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs_select(vcpu->guest_msrs, vcpu->smsrs_bitmap);
+ load_msrs_select(vcpu->host_msrs, vcpu->smsrs_bitmap);
}
/*
@@ -469,35 +466,51 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
-
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
+ int index,save_msrs;
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->smsrs_bitmap = 0;
+ if (is_long_mode(vcpu)) {
+ if ((index=__find_msr_index(vcpu, MSR_SYSCALL_MASK)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_LSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_CSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_KERNEL_GS_BASE)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+ if ((index=__find_msr_index(vcpu, MSR_K6_STAR)) >= 0
+#ifdef X86_64
+ && (vcpu->shadow_efer & EFER_SCE)
#endif
+ ) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
}
+ if ((index = __find_msr_index(vcpu, MSR_EFER)) >= 0) {
+ save_msrs = 1;
+ }
+ else {
+ save_msrs = 0;
+ index = 0;
+ }
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_msrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_msrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_msrs);
}
/*
@@ -594,14 +607,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +624,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs_select(vcpu->guest_msrs,
+ vcpu->smsrs_bitmap);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
[-- Attachment #2: msr30.patch --]
[-- Type: application/octet-stream, Size: 6412 bytes --]
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..e61a7e6 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,7 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int smsrs_bitmap;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
@@ -513,6 +514,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);
void fx_init(struct kvm_vcpu *vcpu);
+void load_msrs_select(struct vmx_msr_entry *e, int bitmap);
+void save_msrs_select(struct vmx_msr_entry *e, int bitmap);
void load_msrs(struct vmx_msr_entry *e, int n);
void save_msrs(struct vmx_msr_entry *e, int n);
void kvm_resched(struct kvm_vcpu *vcpu);
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 1288cff..ef96fae 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -1596,6 +1596,30 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
+void load_msrs_select(struct vmx_msr_entry *e, int bitmap)
+{
+ unsigned long nr;
+
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ clear_bit(nr,&bitmap);
+ wrmsrl(e[nr].index, e[nr].data);
+ }
+}
+EXPORT_SYMBOL_GPL(load_msrs_select);
+
+void save_msrs_select(struct vmx_msr_entry *e, int bitmap)
+{
+ unsigned long nr;
+
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ clear_bit(nr,&bitmap);
+ rdmsrl(e[nr].index, e[nr].data);
+ }
+}
+EXPORT_SYMBOL_GPL(save_msrs_select);
+
void load_msrs(struct vmx_msr_entry *e, int n)
{
int i;
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..67d076c 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -86,15 +86,6 @@ static const u32 vmx_msr_index[] = {
#ifdef CONFIG_X86_64
static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
#endif
static inline int is_page_fault(u32 intr_info)
@@ -117,13 +108,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -307,9 +308,9 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
}
#endif
+ load_msrs_select(vcpu->guest_msrs, vcpu->smsrs_bitmap);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +337,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs_select(vcpu->guest_msrs, vcpu->smsrs_bitmap);
+ load_msrs_select(vcpu->host_msrs, vcpu->smsrs_bitmap);
}
/*
@@ -469,35 +466,51 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
-
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
+ int index,save_msrs;
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->smsrs_bitmap = 0;
+ if (is_long_mode(vcpu)) {
+ if ((index=__find_msr_index(vcpu, MSR_SYSCALL_MASK)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_LSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_CSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_KERNEL_GS_BASE)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+ if ((index=__find_msr_index(vcpu, MSR_K6_STAR)) >= 0
+#ifdef X86_64
+ && (vcpu->shadow_efer & EFER_SCE)
#endif
+ ) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
}
+ if ((index = __find_msr_index(vcpu, MSR_EFER)) >= 0) {
+ save_msrs = 1;
+ }
+ else {
+ save_msrs = 0;
+ index = 0;
+ }
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_msrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_msrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_msrs);
}
/*
@@ -594,14 +607,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +624,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs_select(vcpu->guest_msrs,
+ vcpu->smsrs_bitmap);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
[-- Attachment #4: Type: text/plain, Size: 186 bytes --]
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
* Re: [PATCH] lightweight VM Exit
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FBD91-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-14 15:14 ` Avi Kivity
[not found] ` <46487CD7.8070408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2007-05-14 15:14 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel
Dong, Eddie wrote:
> OK, how about this patch, which further reduces the lightweight VM exit
> MSR save/restore?
>
>
> diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
> index 1288cff..ef96fae 100644
> --- a/drivers/kvm/kvm_main.c
> +++ b/drivers/kvm/kvm_main.c
> @@ -1596,6 +1596,30 @@ void kvm_resched(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_resched);
>
> +void load_msrs_select(struct vmx_msr_entry *e, int bitmap)
> +{
> + unsigned long nr;
> +
> + while (bitmap) {
> + nr = __ffs(bitmap);
> + clear_bit(nr,&bitmap);
> + wrmsrl(e[nr].index, e[nr].data);
> + }
> +}
> +EXPORT_SYMBOL_GPL(load_msrs_select);
> +
> +void save_msrs_select(struct vmx_msr_entry *e, int bitmap)
> +{
> + unsigned long nr;
> +
> + while (bitmap) {
> + nr = __ffs(bitmap);
> + clear_bit(nr,&bitmap);
> + rdmsrl(e[nr].index, e[nr].data);
> + }
> +}
> +EXPORT_SYMBOL_GPL(save_msrs_select);
> +
>
__clear_bit() is faster here (no LOCK prefix). But maybe we can avoid
the entire thing by having a vcpu->active_msr_list (array of struct
vmx_msr_entry) which is re-constructed every time the mode changes
(instead of constructing the bitmap). vmx_get_msr() can first look at
the active msr list and then at the regular msr list.
>
> /*
> @@ -469,35 +466,51 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
> */
> static void setup_msrs(struct kvm_vcpu *vcpu)
> {
> - int nr_skip, nr_good_msrs;
> -
> - if (is_long_mode(vcpu))
> - nr_skip = NR_BAD_MSRS;
> - else
> - nr_skip = NR_64BIT_MSRS;
> - nr_good_msrs = vcpu->nmsrs - nr_skip;
> + int index,save_msrs;
>
space after comma
>
> - /*
> - * MSR_K6_STAR is only needed on long mode guests, and only
> - * if efer.sce is enabled.
> - */
> - if (find_msr_entry(vcpu, MSR_K6_STAR)) {
> - --nr_good_msrs;
> -#ifdef CONFIG_X86_64
> - if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
> - ++nr_good_msrs;
> + vcpu->smsrs_bitmap = 0;
> + if (is_long_mode(vcpu)) {
> + if ((index=__find_msr_index(vcpu, MSR_SYSCALL_MASK)) >= 0) {
> + set_bit(index, &vcpu->smsrs_bitmap);
> + }
>
Assignment outside if (), spaces around =, please. Single statements
without {}.
Also __set_bit() applies here.
> + /*
> + * MSR_K6_STAR is only needed on long mode guests, and only
> + * if efer.sce is enabled.
> + */
> + if ((index=__find_msr_index(vcpu, MSR_K6_STAR)) >= 0
> +#ifdef X86_64
> + && (vcpu->shadow_efer & EFER_SCE)
> #endif
> + ) {
> + set_bit(index, &vcpu->smsrs_bitmap);
>
You're saving MSR_K6_STAR unnecessarily on i386. Since we don't export
EFER on i386 (one day we should...), the guest can't use syscall.
> + }
> }
>
> + if ((index = __find_msr_index(vcpu, MSR_EFER)) >= 0) {
> + save_msrs = 1;
> + }
> + else {
> + save_msrs = 0;
> + index = 0;
> + }
>
Why not use hardware autoloading? Is it slower than software?
Otherwise looks good. Did you measure performance improvement? I
usually use user/test/vmexit.c from kvm-userspace.git.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH] lightweight VM Exit
[not found] ` <46487CD7.8070408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-05-14 15:34 ` Christoph Hellwig
2007-05-15 6:10 ` Dong, Eddie
1 sibling, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2007-05-14 15:34 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
On Mon, May 14, 2007 at 06:14:31PM +0300, Avi Kivity wrote:
> Dong, Eddie wrote:
> > OK, how about this patch, which further reduces the lightweight VM exit
> > MSR save/restore?
> >
> >
> > diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
> > index 1288cff..ef96fae 100644
> > --- a/drivers/kvm/kvm_main.c
> > +++ b/drivers/kvm/kvm_main.c
> > @@ -1596,6 +1596,30 @@ void kvm_resched(struct kvm_vcpu *vcpu)
> > }
> > EXPORT_SYMBOL_GPL(kvm_resched);
> >
> > +void load_msrs_select(struct vmx_msr_entry *e, int bitmap)
> > +{
> > + unsigned long nr;
> > +
> > + while (bitmap) {
> > + nr = __ffs(bitmap);
> > + clear_bit(nr,&bitmap);
> > + wrmsrl(e[nr].index, e[nr].data);
> > + }
> > +}
> > +EXPORT_SYMBOL_GPL(load_msrs_select);
Exported symbols should have names with a meaningful prefix and
a kerneldoc comment describing them.
* Re: [PATCH] lightweight VM Exit
[not found] ` <46487CD7.8070408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-05-14 15:34 ` Christoph Hellwig
@ 2007-05-15 6:10 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC1F2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
1 sibling, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-15 6:10 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
[-- Attachment #1: Type: text/plain, Size: 7508 bytes --]
Avi Kivity wrote:
> Why not use hardware autoloading? Is it slower than software?
I believe HW is faster than SW, but the problem is that this kind of
save/restore is only needed for heavyweight VM exits in KVM. Since HW
doesn't provide an easy way to bypass these MSR save/restores for
lightweight VM exits, we have to do it in SW.
>
> Otherwise looks good. Did you measure performance improvement? I
> usually use user/test/vmexit.c from kvm-userspace.git.
>
Yes, I tested a RHEL5 64-bit guest. On my old Pentium 4 platform I got
a 4.9% performance increase using kernel build as the workload. On my
4-core Clovertown platform (Core 2 Duo) I got a 5.4% performance
increase. The 32-bit guest test didn't show a regression either.
Further improvements can be made on top of this patch, such as
MSR_EFER virtualization.
thx,eddie
A slight revision per Christoph's comments.
Signed-off-by: Yaozu(Eddie) Dong Eddie.Dong-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org
against ca76d209b88c344fc6a8eac17057c0088a3d6940.
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..08dd73f 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,7 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int smsrs_bitmap;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 1288cff..44d8bc4 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -1596,21 +1596,27 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void load_msrs(struct vmx_msr_entry *e, int n)
+void load_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- wrmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ wrmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(load_msrs);
-void save_msrs(struct vmx_msr_entry *e, int n)
+void save_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- rdmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ rdmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(save_msrs);
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..0c69fe4 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -86,15 +86,6 @@ static const u32 vmx_msr_index[] = {
#ifdef CONFIG_X86_64
static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
#endif
static inline int is_page_fault(u32 intr_info)
@@ -117,13 +108,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -306,10 +307,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
+ save_msrs(vcpu->host_msrs, 1 << msr_offset_kernel_gs_base);
}
#endif
+ load_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +337,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
+ load_msrs(vcpu->host_msrs, vcpu->smsrs_bitmap);
}
/*
@@ -469,35 +466,51 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
-
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
+ int index,save_msrs;
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->smsrs_bitmap = 0;
+ if (is_long_mode(vcpu)) {
+ if ((index=__find_msr_index(vcpu, MSR_SYSCALL_MASK)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_LSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_CSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_KERNEL_GS_BASE)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+ if ((index=__find_msr_index(vcpu, MSR_K6_STAR)) >= 0
+#ifdef X86_64
+ && (vcpu->shadow_efer & EFER_SCE)
#endif
+ ) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
}
+ if ((index = __find_msr_index(vcpu, MSR_EFER)) >= 0) {
+ save_msrs = 1;
+ }
+ else {
+ save_msrs = 0;
+ index = 0;
+ }
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_msrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_msrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_msrs);
}
/*
@@ -594,14 +607,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +624,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs(vcpu->guest_msrs,
+ vcpu->smsrs_bitmap);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
[-- Attachment #2: msr31.patch --]
[-- Type: application/octet-stream, Size: 6212 bytes --]
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..08dd73f 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,7 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int smsrs_bitmap;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 1288cff..44d8bc4 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -1596,21 +1596,27 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void load_msrs(struct vmx_msr_entry *e, int n)
+void load_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- wrmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ wrmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(load_msrs);
-void save_msrs(struct vmx_msr_entry *e, int n)
+void save_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- rdmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ rdmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(save_msrs);
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..0c69fe4 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -86,15 +86,6 @@ static const u32 vmx_msr_index[] = {
#ifdef CONFIG_X86_64
static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
#endif
static inline int is_page_fault(u32 intr_info)
@@ -117,13 +108,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -306,10 +307,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
+ save_msrs(vcpu->host_msrs, 1 << msr_offset_kernel_gs_base);
}
#endif
+ load_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +337,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
+ load_msrs(vcpu->host_msrs, vcpu->smsrs_bitmap);
}
/*
@@ -469,35 +466,51 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
-
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
+ int index,save_msrs;
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->smsrs_bitmap = 0;
+ if (is_long_mode(vcpu)) {
+ if ((index=__find_msr_index(vcpu, MSR_SYSCALL_MASK)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_LSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_CSTAR)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ if ((index=__find_msr_index(vcpu, MSR_KERNEL_GS_BASE)) >= 0) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+ if ((index=__find_msr_index(vcpu, MSR_K6_STAR)) >= 0
+#ifdef X86_64
+ && (vcpu->shadow_efer & EFER_SCE)
#endif
+ ) {
+ set_bit(index, &vcpu->smsrs_bitmap);
+ }
}
+ if ((index = __find_msr_index(vcpu, MSR_EFER)) >= 0) {
+ save_msrs = 1;
+ }
+ else {
+ save_msrs = 0;
+ index = 0;
+ }
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_msrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_msrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_msrs);
}
/*
@@ -594,14 +607,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +624,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs(vcpu->guest_msrs,
+ vcpu->smsrs_bitmap);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
[-- Attachment #3: Type: text/plain, Size: 286 bytes --]
-------------------------------------------------------------------------
This SF.net email is sponsored by DB2 Express
Download DB2 Express C - the FREE version of DB2 express and take
control of your XML. No limits. Just data. Click to get it now.
http://sourceforge.net/powerbar/db2/
[-- Attachment #4: Type: text/plain, Size: 186 bytes --]
_______________________________________________
kvm-devel mailing list
kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org
https://lists.sourceforge.net/lists/listinfo/kvm-devel
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH] lightweight VM Exit
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC1F2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-15 7:39 ` Avi Kivity
[not found] ` <464963A3.7090505-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2007-05-15 7:39 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel
Dong, Eddie wrote:
> Avi Kivity wrote:
>
>> Why not use hardware autoloading? Is it slower than software?
>>
>
> I believe HW is faster than SW, but the problem is that this kind of
> save/restore is only needed for heavyweight VM exits in KVM. Since HW
> doesn't provide an easy way to bypass these MSR saves/restores on
> lightweight VM exits, we have to do them in SW.
>
>
k.
>
>> Otherwise looks good. Did you measure performance improvement? I
>> usually use user/test/vmexit.c from kvm-userspace.git.
>>
>>
>
> Yes. I tested a 64-bit RHEL5 guest: on my old Pentium 4 platform I get
> a 4.9% performance increase using kernel build as the workload, and on
> my 4-core Clovertown platform (Core 2 Duo) I get a 5.4% increase. The
> 32-bit guest tests didn't show a regression either.
> Further improvements, such as MSR_EFER virtualization, can be built on
> top of this patch.
>
> thx,eddie
>
> A slight revision per Christoph's comments.
>
You missed my comments regarding coding style.
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] lightweight VM Exit
[not found] ` <464963A3.7090505-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-05-15 10:12 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC3CD-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-15 10:12 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
[-- Attachment #1: Type: text/plain, Size: 453 bytes --]
Avi Kivity wrote:
>
> You missed my comments regarding coding style.
>
Oops, I read the original mail too quickly and went straight to the bottom.
Here is the update.
BTW, vcpu->active_msr_list is not in this patch. I am now working on
another patch for MSR_EFER acceleration, which gives me another 5-10%
performance gain with kernel build. I can come back to the
active_msr_list proposal later if you think it is needed eventually.
thx,eddie
[-- Attachment #2: msr32.patch --]
[-- Type: application/octet-stream, Size: 6232 bytes --]
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..08dd73f 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,7 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int smsrs_bitmap;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
diff --git a/drivers/kvm/kvm_main.c b/drivers/kvm/kvm_main.c
index 1288cff..44d8bc4 100644
--- a/drivers/kvm/kvm_main.c
+++ b/drivers/kvm/kvm_main.c
@@ -1596,21 +1596,27 @@ void kvm_resched(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_resched);
-void load_msrs(struct vmx_msr_entry *e, int n)
+void load_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- wrmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ wrmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(load_msrs);
-void save_msrs(struct vmx_msr_entry *e, int n)
+void save_msrs(struct vmx_msr_entry *e, int bitmap)
{
- int i;
+ unsigned long nr;
- for (i = 0; i < n; ++i)
- rdmsrl(e[i].index, e[i].data);
+ while (bitmap) {
+ nr = __ffs(bitmap);
+ rdmsrl(e[nr].index, e[nr].data);
+ __clear_bit(nr,&bitmap);
+ }
}
EXPORT_SYMBOL_GPL(save_msrs);
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..ca2df5d 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -86,15 +86,6 @@ static const u32 vmx_msr_index[] = {
#ifdef CONFIG_X86_64
static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
#endif
static inline int is_page_fault(u32 intr_info)
@@ -117,13 +108,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -306,10 +307,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
+ save_msrs(vcpu->host_msrs, 1 << msr_offset_kernel_gs_base);
}
#endif
+ load_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +337,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs(vcpu->guest_msrs, vcpu->smsrs_bitmap);
+ load_msrs(vcpu->host_msrs, vcpu->smsrs_bitmap);
}
/*
@@ -469,35 +466,49 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
+ int index, save_msrs;
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
-
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->smsrs_bitmap = 0;
+ if (is_long_mode(vcpu)) {
+ index = __find_msr_index(vcpu, MSR_SYSCALL_MASK);
+ if (index >= 0)
+ __set_bit(index, &vcpu->smsrs_bitmap);
+ index = __find_msr_index(vcpu, MSR_LSTAR);
+ if (index >= 0)
+ __set_bit(index, &vcpu->smsrs_bitmap);
+ index = __find_msr_index(vcpu, MSR_CSTAR);
+ if (index >= 0)
+ __set_bit(index, &vcpu->smsrs_bitmap);
+ index = __find_msr_index(vcpu, MSR_KERNEL_GS_BASE);
+ if (index >= 0)
+ __set_bit(index, &vcpu->smsrs_bitmap);
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+#ifdef X86_64
+ index = __find_msr_index(vcpu, MSR_K6_STAR);
+ if ((index >= 0) && (vcpu->shadow_efer & EFER_SCE))
+ __set_bit(index, &vcpu->smsrs_bitmap);
#endif
}
+ index = __find_msr_index(vcpu, MSR_EFER);
+ if (index >= 0)
+ save_msrs = 1;
+ else {
+ save_msrs = 0;
+ index = 0;
+ }
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_msrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_msrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_msrs);
}
/*
@@ -594,14 +605,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +622,9 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs(vcpu->guest_msrs,
+ vcpu->smsrs_bitmap);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH] lightweight VM Exit
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC3CD-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-15 13:48 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC419-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-15 13:48 UTC (permalink / raw)
To: Dong, Eddie, Avi Kivity; +Cc: kvm-devel
[-- Attachment #1: Type: text/plain, Size: 6962 bytes --]
An alternative: swap entries within the host/guest MSR arrays to avoid the MSR bitmap.
More suggestions?
thx,eddie
Signed-off-by: Yaozu(Eddie) Dong Eddie.Dong-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org
against ca76d209b88c344fc6a8eac17057c0088a3d6940.
[-- Attachment #2: msr33.patch --]
[-- Type: application/octet-stream, Size: 6475 bytes --]
diff --git a/drivers/kvm/kvm.h b/drivers/kvm/kvm.h
index 1bbafba..5f056d9 100644
--- a/drivers/kvm/kvm.h
+++ b/drivers/kvm/kvm.h
@@ -287,6 +287,10 @@ struct kvm_vcpu {
u64 apic_base;
u64 ia32_misc_enable_msr;
int nmsrs;
+ int save_nmsrs;
+#ifdef CONFIG_X86_64
+ int msr_offset_kernel_gs_base;
+#endif
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
diff --git a/drivers/kvm/vmx.c b/drivers/kvm/vmx.c
index 804a623..3af274d 100644
--- a/drivers/kvm/vmx.c
+++ b/drivers/kvm/vmx.c
@@ -84,19 +84,6 @@ static const u32 vmx_msr_index[] = {
};
#define NR_VMX_MSR ARRAY_SIZE(vmx_msr_index)
-#ifdef CONFIG_X86_64
-static unsigned msr_offset_kernel_gs_base;
-#define NR_64BIT_MSRS 4
-/*
- * avoid save/load MSR_SYSCALL_MASK and MSR_LSTAR by std vt
- * mechanism (cpu bug AA24)
- */
-#define NR_BAD_MSRS 2
-#else
-#define NR_64BIT_MSRS 0
-#define NR_BAD_MSRS 0
-#endif
-
static inline int is_page_fault(u32 intr_info)
{
return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VECTOR_MASK |
@@ -117,13 +104,23 @@ static inline int is_external_interrupt(u32 intr_info)
== (INTR_TYPE_EXT_INTR | INTR_INFO_VALID_MASK);
}
-static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+static int __find_msr_index(struct kvm_vcpu *vcpu, u32 msr)
{
int i;
for (i = 0; i < vcpu->nmsrs; ++i)
if (vcpu->guest_msrs[i].index == msr)
- return &vcpu->guest_msrs[i];
+ return i;
+ return -1;
+}
+
+static struct vmx_msr_entry *find_msr_entry(struct kvm_vcpu *vcpu, u32 msr)
+{
+ int i;
+
+ i = __find_msr_index(vcpu, msr);
+ if (i >= 0)
+ return &vcpu->guest_msrs[i];
return NULL;
}
@@ -306,10 +303,10 @@ static void vmx_save_host_state(struct kvm_vcpu *vcpu)
#ifdef CONFIG_X86_64
if (is_long_mode(vcpu)) {
- save_msrs(vcpu->host_msrs + msr_offset_kernel_gs_base, 1);
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
+ save_msrs(vcpu->host_msrs + vcpu->msr_offset_kernel_gs_base, 1);
}
#endif
+ load_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
}
static void vmx_load_host_state(struct kvm_vcpu *vcpu)
@@ -336,12 +333,8 @@ static void vmx_load_host_state(struct kvm_vcpu *vcpu)
reload_tss();
}
-#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu)) {
- save_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- load_msrs(vcpu->host_msrs, NR_BAD_MSRS);
- }
-#endif
+ save_msrs(vcpu->guest_msrs, vcpu->save_nmsrs);
+ load_msrs(vcpu->host_msrs, vcpu->save_nmsrs);
}
/*
@@ -463,41 +456,74 @@ static void vmx_inject_gp(struct kvm_vcpu *vcpu, unsigned error_code)
}
/*
+ * Swap MSR entry in host/guest MSR entry array.
+ */
+void move_msr_up(struct kvm_vcpu *vcpu, int from, int to)
+{
+ struct vmx_msr_entry tmp;
+ tmp = vcpu->guest_msrs[to];
+ vcpu->guest_msrs[to] = vcpu->guest_msrs[from];
+ vcpu->guest_msrs[from] = tmp;
+ tmp = vcpu->host_msrs[to];
+ vcpu->host_msrs[to] = vcpu->host_msrs[from];
+ vcpu->host_msrs[from] = tmp;
+}
+
+/*
* Set up the vmcs to automatically save and restore system
* msrs. Don't touch the 64-bit msrs if the guest is in legacy
* mode, as fiddling with msrs is very expensive.
*/
static void setup_msrs(struct kvm_vcpu *vcpu)
{
- int nr_skip, nr_good_msrs;
+ int index, save_nmsrs;
- if (is_long_mode(vcpu))
- nr_skip = NR_BAD_MSRS;
- else
- nr_skip = NR_64BIT_MSRS;
- nr_good_msrs = vcpu->nmsrs - nr_skip;
+ save_nmsrs = 0;
+ if (is_long_mode(vcpu)) {
+ index = __find_msr_index(vcpu, MSR_SYSCALL_MASK);
+ if (index >= 0)
+ move_msr_up(vcpu, index, save_nmsrs++);
+ index = __find_msr_index(vcpu, MSR_LSTAR);
+ if (index >= 0)
+ move_msr_up(vcpu, index, save_nmsrs++);
+ index = __find_msr_index(vcpu, MSR_CSTAR);
+ if (index >= 0)
+ move_msr_up(vcpu, index, save_nmsrs++);
+ index = __find_msr_index(vcpu, MSR_KERNEL_GS_BASE);
+ if (index >= 0)
+ move_msr_up(vcpu, index, save_nmsrs++);
+ /*
+ * MSR_K6_STAR is only needed on long mode guests, and only
+ * if efer.sce is enabled.
+ */
+#ifdef X86_64
+ index = __find_msr_index(vcpu, MSR_K6_STAR);
+ if ((index >= 0) && (vcpu->shadow_efer & EFER_SCE))
+ move_msr_up(vcpu, index, save_nmsrs++);
+#endif
+ }
+ vcpu->save_nmsrs = save_nmsrs;
- /*
- * MSR_K6_STAR is only needed on long mode guests, and only
- * if efer.sce is enabled.
- */
- if (find_msr_entry(vcpu, MSR_K6_STAR)) {
- --nr_good_msrs;
#ifdef CONFIG_X86_64
- if (is_long_mode(vcpu) && (vcpu->shadow_efer & EFER_SCE))
- ++nr_good_msrs;
+ vcpu->msr_offset_kernel_gs_base =
+ __find_msr_index(vcpu, MSR_KERNEL_GS_BASE);
#endif
+ index = __find_msr_index(vcpu, MSR_EFER);
+ if (index >= 0)
+ save_nmsrs = 1;
+ else {
+ save_nmsrs = 0;
+ index = 0;
}
-
vmcs_writel(VM_ENTRY_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_STORE_ADDR,
- virt_to_phys(vcpu->guest_msrs + nr_skip));
+ virt_to_phys(vcpu->guest_msrs + index));
vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
- virt_to_phys(vcpu->host_msrs + nr_skip));
- vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
- vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, nr_good_msrs); /* 22.2.2 */
+ virt_to_phys(vcpu->host_msrs + index));
+ vmcs_write32(VM_EXIT_MSR_STORE_COUNT, save_nmsrs);
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, save_nmsrs);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, save_nmsrs);
}
/*
@@ -594,14 +620,6 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
case MSR_GS_BASE:
vmcs_writel(GUEST_GS_BASE, data);
break;
- case MSR_LSTAR:
- case MSR_SYSCALL_MASK:
- msr = find_msr_entry(vcpu, msr_index);
- if (msr)
- msr->data = data;
- if (vcpu->vmx_host_state.loaded)
- load_msrs(vcpu->guest_msrs, NR_BAD_MSRS);
- break;
#endif
case MSR_IA32_SYSENTER_CS:
vmcs_write32(GUEST_SYSENTER_CS, data);
@@ -619,6 +637,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32 msr_index, u64 data)
msr = find_msr_entry(vcpu, msr_index);
if (msr) {
msr->data = data;
+ if (vcpu->vmx_host_state.loaded)
+ load_msrs(vcpu->guest_msrs,vcpu->save_nmsrs);
break;
}
return kvm_set_msr_common(vcpu, msr_index, data);
@@ -1330,10 +1350,6 @@ static int vmx_vcpu_setup(struct kvm_vcpu *vcpu)
vcpu->host_msrs[j].reserved = 0;
vcpu->host_msrs[j].data = data;
vcpu->guest_msrs[j] = vcpu->host_msrs[j];
-#ifdef CONFIG_X86_64
- if (index == MSR_KERNEL_GS_BASE)
- msr_offset_kernel_gs_base = j;
-#endif
++vcpu->nmsrs;
}
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH] lightweight VM Exit
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC419-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-16 8:35 ` Avi Kivity
[not found] ` <464AC26D.8040901-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2007-05-16 8:35 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel
Dong, Eddie wrote:
> Alternative to swaping host/guest MSR entry array to avoid MSR bitmap.
> More suggestions?
>
This looks nicer, however I get an oops on fxrstor in
kvm_load_guest_fpu(), when running user/test/vmexit.flat compiled for
i386, on an x86_64 host.
Additionally:
- please include the changelog entry and a Signed-off-by: line
- the patch adds trailing whitespace
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] lightweight VM Exit
[not found] ` <464AC26D.8040901-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2007-05-16 15:23 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A014E8AB2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
0 siblings, 1 reply; 12+ messages in thread
From: Dong, Eddie @ 2007-05-16 15:23 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel
Avi Kivity wrote:
>
> This looks nicer, however I get an oops on fxrstor in
> kvm_load_guest_fpu(), when running user/test/vmexit.flat compiled for
> i386, on an x86_64 host.
>
Avi:
Per your guideline, I did the following under kvm-userspace/user:
1: make test/bootstrap
2: make test/vmexit.flat
3: ./kvmctl test/bootstrap test/vmexit.flat
I got:
rip 0000000000100008 rflags 00010006
cs 0008 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
ds 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
es 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
ss 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
fs 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
gs 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
tr 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0)
gdt f0050/17
idt 0/0
cr0 60000011 cr2 0 cr3 0 cr4 0 cr8 0 efer 0
Aborted
dmesg shows:
[root@vt32-pae user]# dmesg
emulation failed but !mmio_needed? rip 100008 61 74 65 6e
handle_exception: emulate fail
Then I rolled back to the original commit
ca76d209b88c344fc6a8eac17057c0088a3d6940 and got the same error.
Later on I tried 32-on-64; I did:
1: make test/bootstrap
2: i386 make test/vmexit.flat
3: ./kvmctl test/bootstrap test/vmexit.flat
I got the same thing.
Stranger still: did you try these on the previous commit?
thx,eddie
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH] lighweight VM Exit
[not found] ` <10EA09EFD8728347A513008B6B0DA77A014E8AB2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
@ 2007-05-16 15:40 ` Avi Kivity
0 siblings, 0 replies; 12+ messages in thread
From: Avi Kivity @ 2007-05-16 15:40 UTC (permalink / raw)
To: Dong, Eddie; +Cc: kvm-devel
Dong, Eddie wrote:
> Avi Kivity wrote:
>
>> This looks nicer, however I get an oops on fxrstor in
>> kvm_load_guest_fpu(), when running user/test/vmexit.flat compiled for
>> i386, on an x86_64 host.
>>
>>
> Avi:
> Per your guideline, I did the following under kvm-userspace/user:
> 1: make test/bootstrap
> 2: make test/vmexit.flat
> 3: ./kvmctl test/bootstrap test/vmexit.flat
> I got:
> rip 0000000000100008 rflags 00010006
> cs 0008 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type b l 0 g 1 avl 0)
> ds 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> es 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> ss 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> fs 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> gs 0010 (00000000/ffffffff p 1 dpl 0 db 1 s 1 type 3 l 0 g 1 avl 0)
> tr 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
> ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0)
> gdt f0050/17
> idt 0/0
> cr0 60000011 cr2 0 cr3 0 cr4 0 cr8 0 efer 0
> Aborted
> dmesg shows:
> [root@vt32-pae user]# dmesg
> emulation failed but !mmio_needed? rip 100008 61 74 65 6e
> handle_exception: emulate fail
>
>
> Then I rolled back to the original commit
> ca76d209b88c344fc6a8eac17057c0088a3d6940,
> and got the same error.
>
>
> Later on I tried 32-bit on the 64-bit host; I did:
> 1: make test/bootstrap
> 2: i386 make test/vmexit.flat
> 3: ./kvmctl test/bootstrap test/vmexit.flat
> I got the same thing.
>
> Strange; did you try these on a previous commit?
> thx,eddie
>
>
Strange. On current HEAD, I get
[root@qu0054 kvm]# taskset 1 ./user/kvmctl user/test/bootstrap
user/test/vmexit.flat
rax 0000000000000000 rbx 0000000000000000 rcx 0000000000000000 rdx
0000000000000600
rsi 0000000000000000 rdi 0000000000000000 rsp 0000000000000000 rbp
0000000000000000
r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11
0000000000000000
r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15
0000000000000000
rip 000000000000fff0 rflags 00023002
cs f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
es 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
tr 0000 (07ffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0)
gdt 0/ffff
idt 0/ffff
cr0 60000010 cr2 0 cr3 0 cr4 0 cr8 0 efer 0
GUEST: vmexit latency: 5499
[root@qu0054 kvm]# taskset 1 ./user/kvmctl user/test/bootstrap
user/test/vmexit.flat
rax 0000000000000000 rbx 0000000000000000 rcx 0000000000000000 rdx
0000000000000600
rsi 0000000000000000 rdi 0000000000000000 rsp 0000000000000000 rbp
0000000000000000
r8 0000000000000000 r9 0000000000000000 r10 0000000000000000 r11
0000000000000000
r12 0000000000000000 r13 0000000000000000 r14 0000000000000000 r15
0000000000000000
rip 000000000000fff0 rflags 00023002
cs f000 (000f0000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
ds 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
es 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
ss 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
fs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
gs 0000 (00000000/0000ffff p 1 dpl 3 db 0 s 1 type 3 l 0 g 0 avl 0)
tr 0000 (07ffd000/00002088 p 1 dpl 0 db 0 s 0 type b l 0 g 0 avl 0)
ldt 0000 (00000000/0000ffff p 1 dpl 0 db 0 s 0 type 2 l 0 g 0 avl 0)
gdt 0/ffff
idt 0/ffff
cr0 60000010 cr2 0 cr3 0 cr4 0 cr8 0 efer 0
GUEST: vmexit latency: 4726
The first run is with a 64-bit guest, the second with a 32-bit guest.
Can you send your test/bootstrap and test/vmexit.flat?
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2007-05-16 15:40 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-05-10 9:29 guest state leak into host Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016AB9E3-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-13 10:51 ` Avi Kivity
[not found] ` <4646EDB3.6030602-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-05-14 14:57 ` [PATCH] lighweight VM Exit (was:RE: guest state leak into host) Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FBD91-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-14 15:14 ` [PATCH] lighweight VM Exit Avi Kivity
[not found] ` <46487CD7.8070408-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-05-14 15:34 ` Christoph Hellwig
2007-05-15 6:10 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC1F2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-15 7:39 ` Avi Kivity
[not found] ` <464963A3.7090505-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-05-15 10:12 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC3CD-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-15 13:48 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A016FC419-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-16 8:35 ` Avi Kivity
[not found] ` <464AC26D.8040901-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2007-05-16 15:23 ` Dong, Eddie
[not found] ` <10EA09EFD8728347A513008B6B0DA77A014E8AB2-wq7ZOvIWXbNpB2pF5aRoyrfspsVTdybXVpNB7YpNyf8@public.gmane.org>
2007-05-16 15:40 ` Avi Kivity