* [PATCH 0/9] Some vmx vmexit optimizations
@ 2011-03-08 13:57 Avi Kivity
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
The following patchset drops almost 200 cycles from a VMCALL vmexit on a
(fairly old) Intel host by eliminating some needless VMREADs. It should also
speed up instruction emulation by a small amount.
It's a little tricky, so I'm copying lots of people.
Avi Kivity (9):
KVM: Use kvm_get_rflags() and kvm_set_rflags() instead of the raw
versions
KVM: VMX: Optimize vmx_get_rflags()
KVM: VMX: Optimize vmx_get_cpl()
KVM: VMX: Cache cpl
KVM: VMX: Avoid vmx_recover_nmi_blocking() when unneeded
KVM: VMX: Qualify check for host NMI
KVM: VMX: Refactor vmx_complete_atomic_exit()
KVM: VMX: Don't VMREAD VM_EXIT_INTR_INFO unconditionally
KVM: VMX: Use cached VM_EXIT_INTR_INFO in handle_exception
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/svm.c | 14 +++---
arch/x86/kvm/vmx.c | 79 +++++++++++++++++++++++++++++++--------
arch/x86/kvm/x86.c | 8 ++--
4 files changed, 76 insertions(+), 27 deletions(-)
* [PATCH 1/9] KVM: Use kvm_get_rflags() and kvm_set_rflags() instead of the raw versions
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
Some rflags bits are owned by the host, not the guest, so we must use
kvm_get_rflags() to strip those bits away and kvm_set_rflags() to add them
back.
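[Editor's sketch] The host/guest split described above can be illustrated with a toy model (this is NOT the kernel code; all toy_* names, the specific bits chosen, and the single-step trigger are invented here for illustration only): when the host debugs the guest, extra flag bits live in the hardware rflags, and the accessor pair must strip or re-add them so the rest of the code only ever sees the guest's view.

```c
#include <assert.h>

#define X86_EFLAGS_TF (1UL << 8)   /* trap flag */
#define X86_EFLAGS_RF (1UL << 16)  /* resume flag */

/* Toy model of a vcpu: hw_rflags stands in for the value the VMCS/VMCB
 * actually holds; host_single_step models a host-owned condition that
 * forces extra bits into the hardware register. */
struct toy_vcpu {
	unsigned long hw_rflags;
	int host_single_step;
};

/* Strip host-owned bits so callers see only the guest's rflags. */
unsigned long toy_get_rflags(struct toy_vcpu *v)
{
	unsigned long rflags = v->hw_rflags;

	if (v->host_single_step)
		rflags &= ~(X86_EFLAGS_TF | X86_EFLAGS_RF);
	return rflags;
}

/* Re-add host-owned bits before writing the hardware register back. */
void toy_set_rflags(struct toy_vcpu *v, unsigned long rflags)
{
	if (v->host_single_step)
		rflags |= X86_EFLAGS_TF | X86_EFLAGS_RF;
	v->hw_rflags = rflags;
}
```

With this split, writing a raw field directly (as the old svm.c code did) would bypass the strip/re-add step, which is exactly the bug class this patch removes.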
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/svm.c | 14 +++++++-------
arch/x86/kvm/vmx.c | 2 +-
arch/x86/kvm/x86.c | 8 ++++----
3 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8d61df4..adb7f51 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -975,7 +975,7 @@ static void init_vmcb(struct vcpu_svm *svm)
svm_set_efer(&svm->vcpu, 0);
save->dr6 = 0xffff0ff0;
save->dr7 = 0x400;
- save->rflags = 2;
+ kvm_set_rflags(&svm->vcpu, 2);
save->rip = 0x0000fff0;
svm->vcpu.arch.regs[VCPU_REGS_RIP] = save->rip;
@@ -2125,7 +2125,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
nested_vmcb->save.cr3 = kvm_read_cr3(&svm->vcpu);
nested_vmcb->save.cr2 = vmcb->save.cr2;
nested_vmcb->save.cr4 = svm->vcpu.arch.cr4;
- nested_vmcb->save.rflags = vmcb->save.rflags;
+ nested_vmcb->save.rflags = kvm_get_rflags(&svm->vcpu);
nested_vmcb->save.rip = vmcb->save.rip;
nested_vmcb->save.rsp = vmcb->save.rsp;
nested_vmcb->save.rax = vmcb->save.rax;
@@ -2182,7 +2182,7 @@ static int nested_svm_vmexit(struct vcpu_svm *svm)
svm->vmcb->save.ds = hsave->save.ds;
svm->vmcb->save.gdtr = hsave->save.gdtr;
svm->vmcb->save.idtr = hsave->save.idtr;
- svm->vmcb->save.rflags = hsave->save.rflags;
+ kvm_set_rflags(&svm->vcpu, hsave->save.rflags);
svm_set_efer(&svm->vcpu, hsave->save.efer);
svm_set_cr0(&svm->vcpu, hsave->save.cr0 | X86_CR0_PE);
svm_set_cr4(&svm->vcpu, hsave->save.cr4);
@@ -2310,7 +2310,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
hsave->save.efer = svm->vcpu.arch.efer;
hsave->save.cr0 = kvm_read_cr0(&svm->vcpu);
hsave->save.cr4 = svm->vcpu.arch.cr4;
- hsave->save.rflags = vmcb->save.rflags;
+ hsave->save.rflags = kvm_get_rflags(&svm->vcpu);
hsave->save.rip = kvm_rip_read(&svm->vcpu);
hsave->save.rsp = vmcb->save.rsp;
hsave->save.rax = vmcb->save.rax;
@@ -2321,7 +2321,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
copy_vmcb_control_area(hsave, vmcb);
- if (svm->vmcb->save.rflags & X86_EFLAGS_IF)
+ if (kvm_get_rflags(&svm->vcpu) & X86_EFLAGS_IF)
svm->vcpu.arch.hflags |= HF_HIF_MASK;
else
svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
@@ -2339,7 +2339,7 @@ static bool nested_svm_vmrun(struct vcpu_svm *svm)
svm->vmcb->save.ds = nested_vmcb->save.ds;
svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
svm->vmcb->save.idtr = nested_vmcb->save.idtr;
- svm->vmcb->save.rflags = nested_vmcb->save.rflags;
+ kvm_set_rflags(&svm->vcpu, nested_vmcb->save.rflags);
svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
@@ -3382,7 +3382,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK))
return 0;
- ret = !!(vmcb->save.rflags & X86_EFLAGS_IF);
+ ret = !!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF);
if (is_guest_mode(vcpu))
return ret && !(svm->vcpu.arch.hflags & HF_VINTR_MASK);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e2b8c6b..c4efc0a 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2100,7 +2100,7 @@ static int vmx_get_cpl(struct kvm_vcpu *vcpu)
if (!is_protmode(vcpu))
return 0;
- if (vmx_get_rflags(vcpu) & X86_EFLAGS_VM) /* if virtual 8086 */
+ if (kvm_get_rflags(vcpu) & X86_EFLAGS_VM) /* if virtual 8086 */
return 3;
return vmcs_read16(GUEST_CS_SELECTOR) & 3;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 785ae0c..3dbd0d7 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4303,7 +4303,7 @@ static void init_emulate_ctxt(struct kvm_vcpu *vcpu)
kvm_x86_ops->get_cs_db_l_bits(vcpu, &cs_db, &cs_l);
vcpu->arch.emulate_ctxt.vcpu = vcpu;
- vcpu->arch.emulate_ctxt.eflags = kvm_x86_ops->get_rflags(vcpu);
+ vcpu->arch.emulate_ctxt.eflags = kvm_get_rflags(vcpu);
vcpu->arch.emulate_ctxt.eip = kvm_rip_read(vcpu);
vcpu->arch.emulate_ctxt.mode =
(!is_protmode(vcpu)) ? X86EMUL_MODE_REAL :
@@ -4333,7 +4333,7 @@ int kvm_inject_realmode_interrupt(struct kvm_vcpu *vcpu, int irq)
vcpu->arch.emulate_ctxt.eip = c->eip;
memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
- kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+ kvm_set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
if (irq == NMI_VECTOR)
vcpu->arch.nmi_pending = false;
@@ -4466,7 +4466,7 @@ restart:
r = EMULATE_DONE;
toggle_interruptibility(vcpu, vcpu->arch.emulate_ctxt.interruptibility);
- kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+ kvm_set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
kvm_make_request(KVM_REQ_EVENT, vcpu);
memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
@@ -5566,7 +5566,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason,
memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
- kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+ kvm_set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
kvm_make_request(KVM_REQ_EVENT, vcpu);
return EMULATE_DONE;
}
--
1.7.1
* [PATCH 2/9] KVM: VMX: Optimize vmx_get_rflags()
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
If vmx_get_rflags() is called several times within the same vmexit, return the cached value instead of repeating the VMREAD.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/vmx.c | 20 ++++++++++++++------
2 files changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 37bd730..80f3070 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -118,6 +118,7 @@ enum kvm_reg {
enum kvm_reg_ex {
VCPU_EXREG_PDPTR = NR_VCPU_REGS,
VCPU_EXREG_CR3,
+ VCPU_EXREG_RFLAGS,
};
enum {
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c4efc0a..6118177 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -130,6 +130,7 @@ struct vcpu_vmx {
u8 fail;
u32 exit_intr_info;
u32 idt_vectoring_info;
+ ulong rflags;
struct shared_msr_entry *guest_msrs;
int nmsrs;
int save_nmsrs;
@@ -969,17 +970,23 @@ static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
{
unsigned long rflags, save_rflags;
- rflags = vmcs_readl(GUEST_RFLAGS);
- if (to_vmx(vcpu)->rmode.vm86_active) {
- rflags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
- save_rflags = to_vmx(vcpu)->rmode.save_rflags;
- rflags |= save_rflags & ~RMODE_GUEST_OWNED_EFLAGS_BITS;
+ if (!test_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail)) {
+ __set_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail);
+ rflags = vmcs_readl(GUEST_RFLAGS);
+ if (to_vmx(vcpu)->rmode.vm86_active) {
+ rflags &= RMODE_GUEST_OWNED_EFLAGS_BITS;
+ save_rflags = to_vmx(vcpu)->rmode.save_rflags;
+ rflags |= save_rflags & ~RMODE_GUEST_OWNED_EFLAGS_BITS;
+ }
+ to_vmx(vcpu)->rflags = rflags;
}
- return rflags;
+ return to_vmx(vcpu)->rflags;
}
static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
+ __set_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail);
+ to_vmx(vcpu)->rflags = rflags;
if (to_vmx(vcpu)->rmode.vm86_active) {
to_vmx(vcpu)->rmode.save_rflags = rflags;
rflags |= X86_EFLAGS_IOPL | X86_EFLAGS_VM;
@@ -4107,6 +4114,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
);
vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP)
+ | (1 << VCPU_EXREG_RFLAGS)
| (1 << VCPU_EXREG_PDPTR)
| (1 << VCPU_EXREG_CR3));
vcpu->arch.regs_dirty = 0;
--
1.7.1
* [PATCH 3/9] KVM: VMX: Optimize vmx_get_cpl()
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
In long mode, vm86 mode is disallowed, so we need not check for it.
This matters because reading rflags.vm may require an expensive VMREAD.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6118177..82dbebd 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2107,7 +2107,8 @@ static int vmx_get_cpl(struct kvm_vcpu *vcpu)
if (!is_protmode(vcpu))
return 0;
- if (kvm_get_rflags(vcpu) & X86_EFLAGS_VM) /* if virtual 8086 */
+ if (!is_long_mode(vcpu)
+ && (kvm_get_rflags(vcpu) & X86_EFLAGS_VM)) /* if virtual 8086 */
return 3;
return vmcs_read16(GUEST_CS_SELECTOR) & 3;
--
1.7.1
* [PATCH 4/9] KVM: VMX: Cache cpl
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
We may read the cpl quite often in the same vmexit (instruction privilege
check, memory access checks for instruction and operands), so we gain
a bit if we cache the value.
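[Editor's sketch] The caching scheme this patch (and patch 2) uses can be reduced to a small toy pattern (illustrative only; the toy_* names and the constant cpl value are invented, not KVM code): an "available" bitmap records which cached values are valid, the expensive read happens at most once per exit, and every exit invalidates the whole bitmap.

```c
#include <assert.h>

enum { TOY_REG_CPL, TOY_NREGS };

/* Toy stand-in for a vcpu with a lazily filled register cache. */
struct toy_regcache {
	unsigned long regs_avail;  /* bit set => cached value is valid */
	int cpl_cache;
	int vmreads;               /* counts simulated VMREADs */
};

/* Stands in for vmcs_read16(GUEST_CS_SELECTOR) & 3; always "ring 3". */
int toy_expensive_read_cpl(struct toy_regcache *v)
{
	++v->vmreads;
	return 3;
}

/* Read the cpl, going to "hardware" only on the first call per exit. */
int toy_get_cpl(struct toy_regcache *v)
{
	if (!(v->regs_avail & (1UL << TOY_REG_CPL))) {
		v->cpl_cache = toy_expensive_read_cpl(v);
		v->regs_avail |= 1UL << TOY_REG_CPL;
	}
	return v->cpl_cache;
}

/* Each vmexit invalidates all cached registers, as vmx_vcpu_run() does
 * by rewriting regs_avail. */
void toy_vmexit(struct toy_regcache *v)
{
	v->regs_avail = 0;
}
```

The subtle part, as the diff below shows, is that every path that can change the underlying value (set_rflags, set_cr0, set_segment) must also clear the avail bit, or the cache goes stale.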
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/vmx.c | 17 ++++++++++++++++-
2 files changed, 17 insertions(+), 1 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 80f3070..4a2496d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -119,6 +119,7 @@ enum kvm_reg_ex {
VCPU_EXREG_PDPTR = NR_VCPU_REGS,
VCPU_EXREG_CR3,
VCPU_EXREG_RFLAGS,
+ VCPU_EXREG_CPL,
};
enum {
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 82dbebd..87e3d86 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -128,6 +128,7 @@ struct vcpu_vmx {
unsigned long host_rsp;
int launched;
u8 fail;
+ u8 cpl;
u32 exit_intr_info;
u32 idt_vectoring_info;
ulong rflags;
@@ -986,6 +987,7 @@ static unsigned long vmx_get_rflags(struct kvm_vcpu *vcpu)
static void vmx_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
{
__set_bit(VCPU_EXREG_RFLAGS, (ulong *)&vcpu->arch.regs_avail);
+ __clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
to_vmx(vcpu)->rflags = rflags;
if (to_vmx(vcpu)->rmode.vm86_active) {
to_vmx(vcpu)->rmode.save_rflags = rflags;
@@ -1992,6 +1994,7 @@ static void vmx_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
vmcs_writel(CR0_READ_SHADOW, cr0);
vmcs_writel(GUEST_CR0, hw_cr0);
vcpu->arch.cr0 = cr0;
+ __clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
}
static u64 construct_eptp(unsigned long root_hpa)
@@ -2102,7 +2105,7 @@ static u64 vmx_get_segment_base(struct kvm_vcpu *vcpu, int seg)
return vmcs_readl(sf->base);
}
-static int vmx_get_cpl(struct kvm_vcpu *vcpu)
+static int __vmx_get_cpl(struct kvm_vcpu *vcpu)
{
if (!is_protmode(vcpu))
return 0;
@@ -2114,6 +2117,16 @@ static int vmx_get_cpl(struct kvm_vcpu *vcpu)
return vmcs_read16(GUEST_CS_SELECTOR) & 3;
}
+static int vmx_get_cpl(struct kvm_vcpu *vcpu)
+{
+ if (!test_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail)) {
+ __set_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
+ to_vmx(vcpu)->cpl = __vmx_get_cpl(vcpu);
+ }
+ return to_vmx(vcpu)->cpl;
+}
+
+
static u32 vmx_segment_access_rights(struct kvm_segment *var)
{
u32 ar;
@@ -2179,6 +2192,7 @@ static void vmx_set_segment(struct kvm_vcpu *vcpu,
ar |= 0x1; /* Accessed */
vmcs_write32(sf->ar_bytes, ar);
+ __clear_bit(VCPU_EXREG_CPL, (ulong *)&vcpu->arch.regs_avail);
}
static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
@@ -4116,6 +4130,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
vcpu->arch.regs_avail = ~((1 << VCPU_REGS_RIP) | (1 << VCPU_REGS_RSP)
| (1 << VCPU_EXREG_RFLAGS)
+ | (1 << VCPU_EXREG_CPL)
| (1 << VCPU_EXREG_PDPTR)
| (1 << VCPU_EXREG_CR3));
vcpu->arch.regs_dirty = 0;
--
1.7.1
* [PATCH 5/9] KVM: VMX: Avoid vmx_recover_nmi_blocking() when unneeded
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
When we haven't injected an interrupt, we don't need to recover
the NMI blocking state (since the guest can't set it by itself).
This allows us to avoid a VMREAD later on.
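[Editor's sketch] The nmi_known_unmasked flag introduced below amounts to a cheap one-bit cache of "the guest is definitely not NMI-blocked", letting the mask query skip the VMREAD. A toy model of that idea (illustrative only; toy_* names invented, not KVM code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for vcpu_vmx: hw_nmi_blocked models the bit a VMREAD of
 * GUEST_INTERRUPTIBILITY_INFO would return; vmreads counts them. */
struct toy_vmx {
	bool nmi_known_unmasked;
	bool hw_nmi_blocked;
	int vmreads;
};

/* When the hint is set we can answer without touching "hardware". */
bool toy_get_nmi_mask(struct toy_vmx *vmx)
{
	if (vmx->nmi_known_unmasked)
		return false;       /* no VMREAD needed */
	++vmx->vmreads;             /* stands in for vmcs_read32() */
	return vmx->hw_nmi_blocked;
}

/* Delivering an NMI blocks further NMIs, so the hint must be dropped
 * before injection, as vmx_inject_nmi() does in the diff below. */
void toy_inject_nmi(struct toy_vmx *vmx)
{
	vmx->nmi_known_unmasked = false;
	vmx->hw_nmi_blocked = true;
}
```

The cost of the trick is bookkeeping: every event that can change NMI blocking (injection, IRET emulation, task switch, exit-time recovery) must conservatively clear or carefully re-derive the flag, which is what most of this patch does.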
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 16 +++++++++++++++-
1 files changed, 15 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 87e3d86..e84aa05 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -129,6 +129,7 @@ struct vcpu_vmx {
int launched;
u8 fail;
u8 cpl;
+ bool nmi_known_unmasked;
u32 exit_intr_info;
u32 idt_vectoring_info;
ulong rflags;
@@ -2942,6 +2943,7 @@ static void vmx_inject_nmi(struct kvm_vcpu *vcpu)
}
++vcpu->stat.nmi_injections;
+ vmx->nmi_known_unmasked = false;
if (vmx->rmode.vm86_active) {
if (kvm_inject_realmode_interrupt(vcpu, NMI_VECTOR) != EMULATE_DONE)
kvm_make_request(KVM_REQ_TRIPLE_FAULT, vcpu);
@@ -2966,6 +2968,8 @@ static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu)
{
if (!cpu_has_virtual_nmis())
return to_vmx(vcpu)->soft_vnmi_blocked;
+ if (to_vmx(vcpu)->nmi_known_unmasked)
+ return false;
return vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) & GUEST_INTR_STATE_NMI;
}
@@ -2979,6 +2983,7 @@ static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked)
vmx->vnmi_blocked_time = 0;
}
} else {
+ vmx->nmi_known_unmasked = !masked;
if (masked)
vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
GUEST_INTR_STATE_NMI);
@@ -3510,9 +3515,11 @@ static int handle_task_switch(struct kvm_vcpu *vcpu)
switch (type) {
case INTR_TYPE_NMI_INTR:
vcpu->arch.nmi_injected = false;
- if (cpu_has_virtual_nmis())
+ if (cpu_has_virtual_nmis()) {
vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
GUEST_INTR_STATE_NMI);
+ vmx->nmi_known_unmasked = false;
+ }
break;
case INTR_TYPE_EXT_INTR:
case INTR_TYPE_SOFT_INTR:
@@ -3899,6 +3906,8 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK;
if (cpu_has_virtual_nmis()) {
+ if (vmx->nmi_known_unmasked)
+ return;
unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0;
vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
/*
@@ -3915,6 +3924,10 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
vector != DF_VECTOR && !idtv_info_valid)
vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
GUEST_INTR_STATE_NMI);
+ else
+ vmx->nmi_known_unmasked =
+ !(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO)
+ & GUEST_INTR_STATE_NMI);
} else if (unlikely(vmx->soft_vnmi_blocked))
vmx->vnmi_blocked_time +=
ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
@@ -3953,6 +3966,7 @@ static void __vmx_complete_interrupts(struct vcpu_vmx *vmx,
*/
vmcs_clear_bits(GUEST_INTERRUPTIBILITY_INFO,
GUEST_INTR_STATE_NMI);
+ vmx->nmi_known_unmasked = true;
break;
case INTR_TYPE_SOFT_EXCEPTION:
vmx->vcpu.arch.event_exit_inst_len =
--
1.7.1
* [PATCH 6/9] KVM: VMX: Qualify check for host NMI
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
Check the exit reason first; this later allows us to avoid
a VMREAD of VM_EXIT_INTR_INFO.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e84aa05..6eadeaf 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3888,7 +3888,8 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
kvm_machine_check();
/* We need to handle NMIs before interrupts are enabled */
- if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
+ if (vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI &&
+ (exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
(exit_intr_info & INTR_INFO_VALID_MASK)) {
kvm_before_handle_nmi(&vmx->vcpu);
asm("int $2");
--
1.7.1
* [PATCH 7/9] KVM: VMX: Refactor vmx_complete_atomic_exit()
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
Move the exit reason checks to the front of the function, for early
exit in the common case.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 15 +++++++++------
1 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 6eadeaf..36fbe11 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3879,17 +3879,20 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
{
- u32 exit_intr_info = vmx->exit_intr_info;
+ u32 exit_intr_info;
+
+ if (!(vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY
+ || vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI))
+ return;
+
+ exit_intr_info = vmx->exit_intr_info;
/* Handle machine checks before interrupts are enabled */
- if ((vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY)
- || (vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI
- && is_machine_check(exit_intr_info)))
+ if (is_machine_check(exit_intr_info))
kvm_machine_check();
/* We need to handle NMIs before interrupts are enabled */
- if (vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI &&
- (exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
+ if ((exit_intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR &&
(exit_intr_info & INTR_INFO_VALID_MASK)) {
kvm_before_handle_nmi(&vmx->vcpu);
asm("int $2");
--
1.7.1
* [PATCH 8/9] KVM: VMX: Don't VMREAD VM_EXIT_INTR_INFO unconditionally
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
Only read it if we're going to use it later.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 9 +++++++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 36fbe11..36c889e 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3885,6 +3885,7 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
|| vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI))
return;
+ vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
exit_intr_info = vmx->exit_intr_info;
/* Handle machine checks before interrupts are enabled */
@@ -3902,7 +3903,7 @@ static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
{
- u32 exit_intr_info = vmx->exit_intr_info;
+ u32 exit_intr_info;
bool unblock_nmi;
u8 vector;
bool idtv_info_valid;
@@ -3912,6 +3913,11 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx)
if (cpu_has_virtual_nmis()) {
if (vmx->nmi_known_unmasked)
return;
+ /*
+ * Can't use vmx->exit_intr_info since we're not sure what
+ * the exit reason is.
+ */
+ exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0;
vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
/*
@@ -4159,7 +4165,6 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
vmx->launched = 1;
vmx->exit_reason = vmcs_read32(VM_EXIT_REASON);
- vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
vmx_complete_atomic_exit(vmx);
vmx_recover_nmi_blocking(vmx);
--
1.7.1
* [PATCH 9/9] KVM: VMX: Use cached VM_EXIT_INTR_INFO in handle_exception
From: Avi Kivity @ 2011-03-08 13:57 UTC (permalink / raw)
To: Marcelo Tosatti, kvm, Gleb Natapov, Jan Kiszka
vmx_complete_atomic_exit() cached it for us, so we can use it here.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 36c889e..4b4dfc2 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3101,7 +3101,7 @@ static int handle_exception(struct kvm_vcpu *vcpu)
enum emulation_result er;
vect_info = vmx->idt_vectoring_info;
- intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
+ intr_info = vmx->exit_intr_info;
if (is_machine_check(intr_info))
return handle_machine_check(vcpu);
--
1.7.1
* Re: [PATCH 4/9] KVM: VMX: Cache cpl
From: Gleb Natapov @ 2011-03-08 14:20 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, Jan Kiszka
On Tue, Mar 08, 2011 at 03:57:40PM +0200, Avi Kivity wrote:
> We may read the cpl quite often in the same vmexit (instruction privilege
> check, memory access checks for instruction and operands), so we gain
> a bit if we cache the value.
>
Shouldn't VCPU_EXREG_CPL be cleared in vmx_set_efer too?
--
Gleb.
* Re: [PATCH 4/9] KVM: VMX: Cache cpl
From: Avi Kivity @ 2011-03-08 14:29 UTC (permalink / raw)
To: Gleb Natapov; +Cc: Marcelo Tosatti, kvm, Jan Kiszka
On 03/08/2011 04:20 PM, Gleb Natapov wrote:
> On Tue, Mar 08, 2011 at 03:57:40PM +0200, Avi Kivity wrote:
> > We may read the cpl quite often in the same vmexit (instruction privilege
> > check, memory access checks for instruction and operands), so we gain
> > a bit if we cache the value.
> >
> Shouldn't VCPU_EXREG_CPL be cleared in vmx_set_efer too?
I anticipated that question (which really means that a comment is
needed). Flipping EFER.LMA is done by flipping CR0.PG (and doesn't
affect the cpl in any case).
--
error compiling committee.c: too many arguments to function
* Re: [PATCH 4/9] KVM: VMX: Cache cpl
From: Gleb Natapov @ 2011-03-08 14:38 UTC (permalink / raw)
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm, Jan Kiszka
On Tue, Mar 08, 2011 at 04:29:10PM +0200, Avi Kivity wrote:
> On 03/08/2011 04:20 PM, Gleb Natapov wrote:
> >On Tue, Mar 08, 2011 at 03:57:40PM +0200, Avi Kivity wrote:
> >> We may read the cpl quite often in the same vmexit (instruction privilege
> >> check, memory access checks for instruction and operands), so we gain
> >> a bit if we cache the value.
> >>
> >Shouldn't VCPU_EXREG_CPL be cleared in vmx_set_efer too?
>
> I anticipated that question (which really means that a comment is
> needed). Flipping EFER.LMA is done by flipping CR0.PG (and doesn't
> affect cpl. in any case).
>
Ah, so you can't clear LMA by writing to EFER.
--
Gleb.
* Re: [PATCH 7/9] KVM: VMX: Refactor vmx_complete_atomic_exit()
From: Marcelo Tosatti @ 2011-03-15 21:40 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm, Gleb Natapov, Jan Kiszka
On Tue, Mar 08, 2011 at 03:57:43PM +0200, Avi Kivity wrote:
> Move the exit reason checks to the front of the function, for early
> exit in the common case.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
> ---
> arch/x86/kvm/vmx.c | 15 +++++++++------
> 1 files changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 6eadeaf..36fbe11 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3879,17 +3879,20 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
>
> static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
> {
> - u32 exit_intr_info = vmx->exit_intr_info;
> + u32 exit_intr_info;
> +
> + if (!(vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY
> + || vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI))
> + return;
> +
> + exit_intr_info = vmx->exit_intr_info;
So you're saving a vmread on exception and mce_during_vmentry exits?
Managing nmi_known_unmasked appears tricky... perhaps worthwhile to
have it in a helper that sets the bit in interruptibility info +
nmi_known_unmasked.
* Re: [PATCH 7/9] KVM: VMX: Refactor vmx_complete_atomic_exit()
From: Avi Kivity @ 2011-03-21 8:55 UTC (permalink / raw)
To: Marcelo Tosatti; +Cc: kvm, Gleb Natapov, Jan Kiszka
On 03/15/2011 11:40 PM, Marcelo Tosatti wrote:
> On Tue, Mar 08, 2011 at 03:57:43PM +0200, Avi Kivity wrote:
> > Move the exit reason checks to the front of the function, for early
> > exit in the common case.
> >
>
> >
> > static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx)
> > {
> > - u32 exit_intr_info = vmx->exit_intr_info;
> > + u32 exit_intr_info;
> > +
> > + if (!(vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY
> > + || vmx->exit_reason == EXIT_REASON_EXCEPTION_NMI))
> > + return;
> > +
> > + exit_intr_info = vmx->exit_intr_info;
>
> So you're saving a vmread on exception and mce_during_vmentry exits?
Well, the real goal is patch 8, which removes the unconditional read of
VM_EXIT_INTR_INFO. Right now we have two reads on exceptions and one on
other exits; we drop both by one.
> Managing nmi_known_unmasked appears tricky... perhaps worthwhile to
> have it in a helper that sets the bit in interruptibility info +
> nmi_known_unmasked.
I'll look into it, likely as an add-on patch.
--
error compiling committee.c: too many arguments to function