* [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3)
@ 2009-05-20 11:17 Avi Kivity
2009-05-20 11:17 ` [PATCH 01/46] KVM: VMX: Make flexpriority module parameter reflect hardware capability Avi Kivity
` (45 more replies)
0 siblings, 46 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:17 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
Second batch of the 2.6.31 merge window submission. Note there will be only
three batches instead of the four advertised in the previous message, due to a
divide failure. Please review.
Avi Kivity (3):
KVM: VMX: Make flexpriority module parameter reflect hardware
capability
KVM: MMU: Use different shadows when EFER.NXE changes
KVM: Replace kvmclock open-coded get_cpu_var() with the real thing
Dong, Eddie (2):
KVM: MMU: Emulate #PF error code of reserved bits violation
KVM: Use rsvd_bits_mask in load_pdptrs()
Eddie Dong (1):
KVM: MMU: Fix comment in page_fault()
Gleb Natapov (22):
KVM: VMX: Fix handling of a fault during NMI unblocked due to IRET
KVM: VMX: Rewrite vmx_complete_interrupt()'s twisted maze of if()
statements
KVM: VMX: Do not zero idt_vectoring_info in
vmx_complete_interrupts().
KVM: Fix task switch back link handling.
KVM: Fix unneeded instruction skipping during task switching.
KVM: x86 emulator: fix call near emulation
KVM: x86 emulator: Add decoding of 16bit second immediate argument
KVM: x86 emulator: Add lcall decoding
KVM: x86 emulator: Complete ljmp decoding at decode stage
KVM: x86 emulator: Complete short/near jcc decoding in decode stage
KVM: x86 emulator: Complete decoding of call near in decode stage
KVM: x86 emulator: Add unsigned byte immediate decode
KVM: x86 emulator: Completely decode in/out at decoding stage
KVM: x86 emulator: Decode soft interrupt instructions
KVM: x86 emulator: Add new mode of instruction emulation: skip
KVM: SVM: Skip instruction on a task switch only when appropriate
KVM: Make kvm_cpu_(has|get)_interrupt() work for userspace irqchip
too
KVM: VMX: Consolidate userspace and kernel interrupt injection for
VMX
KVM: VMX: Cleanup vmx_intr_assist()
KVM: Use kvm_arch_interrupt_allowed() instead of checking
interrupt_window_open directly
KVM: SVM: Coalesce userspace/kernel irqchip interrupt injection logic
KVM: Remove exception_injected() callback.
Jan Kiszka (1):
KVM: MMU: Fix auditing code
Jes Sorensen (5):
KVM: ia64: Don't hold slots_lock in guest mode
KVM: ia64: remove empty function vti_vcpu_load()
KVM: ia64: restore irq state before calling kvm_vcpu_init
KVM: ia64: preserve int status through call to kvm_insert_vmm_mapping
KVM: ia64: ia64 vcpu_reset() do not call kmalloc() with irqs disabled
Marcelo Tosatti (3):
KVM: PIT: fix count read and mode 0 handling
KVM: MMU: remove global page optimization logic
KVM: x86: check for cr3 validity in ioctl_set_sregs
Sheng Yang (4):
KVM: VMX: Correct wrong vmcs field sizes
KVM: VMX: Clean up Flex Priority related
KVM: VMX: Fix feature testing
KVM: MMU: Discard reserved bits checking on PDE bit 7-8
Wei Yongjun (1):
KVM: remove pointless conditional before kfree() in lapic
initialization
Xiantao Zhang (1):
KVM: ia64: Flush all TLBs once guest's memory mapping changes.
Yang Zhang (1):
KVM: ia64: enable external interrupt in vmm
Zhang, Xiantao (1):
KVM: ia64: make kvm depend on CONFIG_MODULES.
nathan binkert (1):
KVM: Make kvm header C++ friendly
arch/ia64/kvm/Kconfig | 2 +-
arch/ia64/kvm/kvm-ia64.c | 82 ++++++-----
arch/ia64/kvm/process.c | 5 +
arch/ia64/kvm/vmm_ivt.S | 18 ++--
arch/ia64/kvm/vtlb.c | 3 +-
arch/x86/include/asm/kvm_host.h | 11 +-
arch/x86/include/asm/svm.h | 1 +
arch/x86/kvm/i8254.c | 26 ++--
arch/x86/kvm/irq.c | 7 +
arch/x86/kvm/mmu.c | 136 +++++++++++-------
arch/x86/kvm/mmu.h | 5 +
arch/x86/kvm/paging_tmpl.h | 15 ++-
arch/x86/kvm/svm.c | 226 ++++++++++++++---------------
arch/x86/kvm/vmx.c | 303 +++++++++++++++++++--------------------
arch/x86/kvm/x86.c | 105 ++++++++++----
arch/x86/kvm/x86_emulate.c | 121 ++++++----------
include/linux/kvm.h | 6 +-
17 files changed, 569 insertions(+), 503 deletions(-)
^ permalink raw reply [flat|nested] 47+ messages in thread
* [PATCH 01/46] KVM: VMX: Make flexpriority module parameter reflect hardware capability
From: Avi Kivity @ 2009-05-20 11:17 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
If the hardware does not support flexpriority, zero the module parameter.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b9e06b0..37ae13d 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -237,9 +237,7 @@ static inline int cpu_has_secondary_exec_ctrls(void)
static inline bool cpu_has_vmx_virtualize_apic_accesses(void)
{
- return flexpriority_enabled
- && (vmcs_config.cpu_based_2nd_exec_ctrl &
- SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES);
+ return flexpriority_enabled;
}
static inline int cpu_has_vmx_invept_individual_addr(void)
@@ -1203,6 +1201,9 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
if (!cpu_has_vmx_ept())
enable_ept = 0;
+ if (!(vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
+ flexpriority_enabled = 0;
+
min = 0;
#ifdef CONFIG_X86_64
min |= VM_EXIT_HOST_ADDR_SPACE_SIZE;
--
1.6.0.6
* [PATCH 02/46] KVM: VMX: Correct wrong vmcs field sizes
From: Avi Kivity @ 2009-05-20 11:17 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Sheng Yang <sheng@linux.intel.com>
EXIT_QUALIFICATION and GUEST_LINEAR_ADDRESS are natural width, not 64-bit.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 37ae13d..aba41ae 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2991,7 +2991,7 @@ static int handle_vmcall(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int handle_invlpg(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u64 exit_qualification = vmcs_read64(EXIT_QUALIFICATION);
+ unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
kvm_mmu_invlpg(vcpu, exit_qualification);
skip_emulated_instruction(vcpu);
@@ -3007,11 +3007,11 @@ static int handle_wbinvd(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int handle_apic_access(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u64 exit_qualification;
+ unsigned long exit_qualification;
enum emulation_result er;
unsigned long offset;
- exit_qualification = vmcs_read64(EXIT_QUALIFICATION);
+ exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
offset = exit_qualification & 0xffful;
er = emulate_instruction(vcpu, kvm_run, 0, 0, 0);
@@ -3062,11 +3062,11 @@ static int handle_task_switch(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
static int handle_ept_violation(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
- u64 exit_qualification;
+ unsigned long exit_qualification;
gpa_t gpa;
int gla_validity;
- exit_qualification = vmcs_read64(EXIT_QUALIFICATION);
+ exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
if (exit_qualification & (1 << 6)) {
printk(KERN_ERR "EPT: GPA exceeds GAW!\n");
@@ -3078,7 +3078,7 @@ static int handle_ept_violation(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
printk(KERN_ERR "EPT: Handling EPT violation failed!\n");
printk(KERN_ERR "EPT: GPA: 0x%lx, GVA: 0x%lx\n",
(long unsigned int)vmcs_read64(GUEST_PHYSICAL_ADDRESS),
- (long unsigned int)vmcs_read64(GUEST_LINEAR_ADDRESS));
+ vmcs_readl(GUEST_LINEAR_ADDRESS));
printk(KERN_ERR "EPT: Exit qualification is 0x%lx\n",
(long unsigned int)exit_qualification);
kvm_run->exit_reason = KVM_EXIT_UNKNOWN;
--
1.6.0.6
* [PATCH 03/46] KVM: MMU: Fix comment in page_fault()
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Eddie Dong <eddie.dong@intel.com>
The original comment described the code as it was before refactoring.
Signed-off-by: Yaozu (Eddie) Dong <eddie.dong@intel.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 855eb71..eae9499 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -379,7 +379,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr,
return r;
/*
- * Look up the shadow pte for the faulting address.
+ * Look up the guest pte for the faulting address.
*/
r = FNAME(walk_addr)(&walker, vcpu, addr, write_fault, user_fault,
fetch_fault);
--
1.6.0.6
* [PATCH 04/46] KVM: ia64: enable external interrupt in vmm
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Yang Zhang <yang.zhang@intel.com>
Currently, the interrupt enable bit is cleared while in the vmm. This patch
sets the bit so that external interrupts can be handled while in the vmm,
which improves I/O performance.
Signed-off-by: Yang Zhang <yang.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/process.c | 5 +++++
arch/ia64/kvm/vmm_ivt.S | 18 +++++++++---------
arch/ia64/kvm/vtlb.c | 3 ++-
3 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/ia64/kvm/process.c b/arch/ia64/kvm/process.c
index b1dc809..a8f84da 100644
--- a/arch/ia64/kvm/process.c
+++ b/arch/ia64/kvm/process.c
@@ -652,20 +652,25 @@ void kvm_ia64_handle_break(unsigned long ifa, struct kvm_pt_regs *regs,
unsigned long isr, unsigned long iim)
{
struct kvm_vcpu *v = current_vcpu;
+ long psr;
if (ia64_psr(regs)->cpl == 0) {
/* Allow hypercalls only when cpl = 0. */
if (iim == DOMN_PAL_REQUEST) {
+ local_irq_save(psr);
set_pal_call_data(v);
vmm_transition(v);
get_pal_call_result(v);
vcpu_increment_iip(v);
+ local_irq_restore(psr);
return;
} else if (iim == DOMN_SAL_REQUEST) {
+ local_irq_save(psr);
set_sal_call_data(v);
vmm_transition(v);
get_sal_call_result(v);
vcpu_increment_iip(v);
+ local_irq_restore(psr);
return;
}
}
diff --git a/arch/ia64/kvm/vmm_ivt.S b/arch/ia64/kvm/vmm_ivt.S
index 3ef1a01..40920c6 100644
--- a/arch/ia64/kvm/vmm_ivt.S
+++ b/arch/ia64/kvm/vmm_ivt.S
@@ -95,7 +95,7 @@ GLOBAL_ENTRY(kvm_vmm_panic)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@@ -249,7 +249,7 @@ ENTRY(kvm_break_fault)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15)ssm psr.i // restore psr.i
+ (p15)ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@@ -439,7 +439,7 @@ kvm_dispatch_vexirq:
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.i
adds r3=8,r2 // set up second base pointer
;;
KVM_SAVE_REST
@@ -819,7 +819,7 @@ ENTRY(kvm_dtlb_miss_dispatch)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor_prepare),gp
;;
KVM_SAVE_REST
@@ -842,7 +842,7 @@ ENTRY(kvm_itlb_miss_dispatch)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@@ -871,7 +871,7 @@ ENTRY(kvm_dispatch_reflection)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@@ -898,7 +898,7 @@ ENTRY(kvm_dispatch_virtualization_fault)
;;
srlz.i // guarantee that interruption collection is on
;;
- //(p15) ssm psr.i // restore psr.i
+ (p15) ssm psr.i // restore psr.i
addl r14=@gprel(ia64_leave_hypervisor_prepare),gp
;;
KVM_SAVE_REST
@@ -920,7 +920,7 @@ ENTRY(kvm_dispatch_interrupt)
;;
srlz.i
;;
- //(p15) ssm psr.i
+ (p15) ssm psr.i
addl r14=@gprel(ia64_leave_hypervisor),gp
;;
KVM_SAVE_REST
@@ -1333,7 +1333,7 @@ hostret = r24
;;
(p7) srlz.i
;;
-//(p6) ssm psr.i
+(p6) ssm psr.i
;;
mov rp=rpsave
mov ar.pfs=pfssave
diff --git a/arch/ia64/kvm/vtlb.c b/arch/ia64/kvm/vtlb.c
index 2c2501f..4290a42 100644
--- a/arch/ia64/kvm/vtlb.c
+++ b/arch/ia64/kvm/vtlb.c
@@ -254,7 +254,8 @@ u64 guest_vhpt_lookup(u64 iha, u64 *pte)
"(p7) st8 [%2]=r9;;"
"ssm psr.ic;;"
"srlz.d;;"
- /* "ssm psr.i;;" Once interrupts in vmm open, need fix*/
+ "ssm psr.i;;"
+ "srlz.d;;"
: "=r"(ret) : "r"(iha), "r"(pte):"memory");
return ret;
--
1.6.0.6
* [PATCH 05/46] KVM: MMU: Emulate #PF error code of reserved bits violation
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Dong, Eddie <eddie.dong@intel.com>
Detect, indicate, and propagate page faults where reserved bits are set.
Take care to handle the different paging modes, each of which has different
sets of reserved bits.
[avi: fix pte reserved bits for efer.nxe=0]
Signed-off-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 2 +
arch/x86/kvm/mmu.c | 69 +++++++++++++++++++++++++++++++++++++++
arch/x86/kvm/paging_tmpl.h | 7 ++++
arch/x86/kvm/x86.c | 10 ++++++
4 files changed, 88 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8351c4d..548b97d 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -261,6 +261,7 @@ struct kvm_mmu {
union kvm_mmu_page_role base_role;
u64 *pae_root;
+ u64 rsvd_bits_mask[2][4];
};
struct kvm_vcpu_arch {
@@ -791,5 +792,6 @@ asmlinkage void kvm_handle_fault_on_reboot(void);
#define KVM_ARCH_WANT_MMU_NOTIFIER
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
int kvm_age_hva(struct kvm *kvm, unsigned long hva);
+int cpuid_maxphyaddr(struct kvm_vcpu *vcpu);
#endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9256484..24f5a57 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -126,6 +126,7 @@ module_param(oos_shadow, bool, 0644);
#define PFERR_PRESENT_MASK (1U << 0)
#define PFERR_WRITE_MASK (1U << 1)
#define PFERR_USER_MASK (1U << 2)
+#define PFERR_RSVD_MASK (1U << 3)
#define PFERR_FETCH_MASK (1U << 4)
#define PT_DIRECTORY_LEVEL 2
@@ -179,6 +180,11 @@ static u64 __read_mostly shadow_accessed_mask;
static u64 __read_mostly shadow_dirty_mask;
static u64 __read_mostly shadow_mt_mask;
+static inline u64 rsvd_bits(int s, int e)
+{
+ return ((1ULL << (e - s + 1)) - 1) << s;
+}
+
void kvm_mmu_set_nonpresent_ptes(u64 trap_pte, u64 notrap_pte)
{
shadow_trap_nonpresent_pte = trap_pte;
@@ -2151,6 +2157,14 @@ static void paging_free(struct kvm_vcpu *vcpu)
nonpaging_free(vcpu);
}
+static bool is_rsvd_bits_set(struct kvm_vcpu *vcpu, u64 gpte, int level)
+{
+ int bit7;
+
+ bit7 = (gpte >> 7) & 1;
+ return (gpte & vcpu->arch.mmu.rsvd_bits_mask[bit7][level-1]) != 0;
+}
+
#define PTTYPE 64
#include "paging_tmpl.h"
#undef PTTYPE
@@ -2159,6 +2173,55 @@ static void paging_free(struct kvm_vcpu *vcpu)
#include "paging_tmpl.h"
#undef PTTYPE
+static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
+{
+ struct kvm_mmu *context = &vcpu->arch.mmu;
+ int maxphyaddr = cpuid_maxphyaddr(vcpu);
+ u64 exb_bit_rsvd = 0;
+
+ if (!is_nx(vcpu))
+ exb_bit_rsvd = rsvd_bits(63, 63);
+ switch (level) {
+ case PT32_ROOT_LEVEL:
+ /* no rsvd bits for 2 level 4K page table entries */
+ context->rsvd_bits_mask[0][1] = 0;
+ context->rsvd_bits_mask[0][0] = 0;
+ if (is_cpuid_PSE36())
+ /* 36bits PSE 4MB page */
+ context->rsvd_bits_mask[1][1] = rsvd_bits(17, 21);
+ else
+ /* 32 bits PSE 4MB page */
+ context->rsvd_bits_mask[1][1] = rsvd_bits(13, 21);
+ context->rsvd_bits_mask[1][0] = ~0ull;
+ break;
+ case PT32E_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62); /* PDE */
+ context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62); /* PTE */
+ context->rsvd_bits_mask[1][1] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 62) |
+ rsvd_bits(13, 20); /* large page */
+ context->rsvd_bits_mask[1][0] = ~0ull;
+ break;
+ case PT64_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][3] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51) | rsvd_bits(7, 8);
+ context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51) | rsvd_bits(7, 8);
+ context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51) | rsvd_bits(7, 8);
+ context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51);
+ context->rsvd_bits_mask[1][3] = context->rsvd_bits_mask[0][3];
+ context->rsvd_bits_mask[1][2] = context->rsvd_bits_mask[0][2];
+ context->rsvd_bits_mask[1][1] = exb_bit_rsvd |
+ rsvd_bits(maxphyaddr, 51) | rsvd_bits(13, 20);
+ context->rsvd_bits_mask[1][0] = ~0ull;
+ break;
+ }
+}
+
static int paging64_init_context_common(struct kvm_vcpu *vcpu, int level)
{
struct kvm_mmu *context = &vcpu->arch.mmu;
@@ -2179,6 +2242,7 @@ static int paging64_init_context_common(struct kvm_vcpu *vcpu, int level)
static int paging64_init_context(struct kvm_vcpu *vcpu)
{
+ reset_rsvds_bits_mask(vcpu, PT64_ROOT_LEVEL);
return paging64_init_context_common(vcpu, PT64_ROOT_LEVEL);
}
@@ -2186,6 +2250,7 @@ static int paging32_init_context(struct kvm_vcpu *vcpu)
{
struct kvm_mmu *context = &vcpu->arch.mmu;
+ reset_rsvds_bits_mask(vcpu, PT32_ROOT_LEVEL);
context->new_cr3 = paging_new_cr3;
context->page_fault = paging32_page_fault;
context->gva_to_gpa = paging32_gva_to_gpa;
@@ -2201,6 +2266,7 @@ static int paging32_init_context(struct kvm_vcpu *vcpu)
static int paging32E_init_context(struct kvm_vcpu *vcpu)
{
+ reset_rsvds_bits_mask(vcpu, PT32E_ROOT_LEVEL);
return paging64_init_context_common(vcpu, PT32E_ROOT_LEVEL);
}
@@ -2221,12 +2287,15 @@ static int init_kvm_tdp_mmu(struct kvm_vcpu *vcpu)
context->gva_to_gpa = nonpaging_gva_to_gpa;
context->root_level = 0;
} else if (is_long_mode(vcpu)) {
+ reset_rsvds_bits_mask(vcpu, PT64_ROOT_LEVEL);
context->gva_to_gpa = paging64_gva_to_gpa;
context->root_level = PT64_ROOT_LEVEL;
} else if (is_pae(vcpu)) {
+ reset_rsvds_bits_mask(vcpu, PT32E_ROOT_LEVEL);
context->gva_to_gpa = paging64_gva_to_gpa;
context->root_level = PT32E_ROOT_LEVEL;
} else {
+ reset_rsvds_bits_mask(vcpu, PT32_ROOT_LEVEL);
context->gva_to_gpa = paging32_gva_to_gpa;
context->root_level = PT32_ROOT_LEVEL;
}
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index eae9499..09782a9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -123,6 +123,7 @@ static int FNAME(walk_addr)(struct guest_walker *walker,
gfn_t table_gfn;
unsigned index, pt_access, pte_access;
gpa_t pte_gpa;
+ int rsvd_fault = 0;
pgprintk("%s: addr %lx\n", __func__, addr);
walk:
@@ -157,6 +158,10 @@ walk:
if (!is_present_pte(pte))
goto not_present;
+ rsvd_fault = is_rsvd_bits_set(vcpu, pte, walker->level);
+ if (rsvd_fault)
+ goto access_error;
+
if (write_fault && !is_writeble_pte(pte))
if (user_fault || is_write_protection(vcpu))
goto access_error;
@@ -232,6 +237,8 @@ err:
walker->error_code |= PFERR_USER_MASK;
if (fetch_fault)
walker->error_code |= PFERR_FETCH_MASK;
+ if (rsvd_fault)
+ walker->error_code |= PFERR_RSVD_MASK;
return 0;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ab61ea6..12ada0e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3013,6 +3013,16 @@ struct kvm_cpuid_entry2 *kvm_find_cpuid_entry(struct kvm_vcpu *vcpu,
return best;
}
+int cpuid_maxphyaddr(struct kvm_vcpu *vcpu)
+{
+ struct kvm_cpuid_entry2 *best;
+
+ best = kvm_find_cpuid_entry(vcpu, 0x80000008, 0);
+ if (best)
+ return best->eax & 0xff;
+ return 36;
+}
+
void kvm_emulate_cpuid(struct kvm_vcpu *vcpu)
{
u32 function, index;
--
1.6.0.6
* [PATCH 06/46] KVM: MMU: Use different shadows when EFER.NXE changes
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
A pte that is shadowed when the guest EFER.NXE=1 is not valid when
EFER.NXE=0; if bit 63 is set, the pte should cause a fault, and since the
shadow EFER always has NX enabled, this won't happen.
Fix by using a different shadow page table for different EFER.NXE bits. This
allows vcpus to run correctly with different values of EFER.NXE, and for
transitions on this bit to be handled correctly without requiring a full
flush.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 3 +++
2 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 548b97d..3fc4623 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -185,6 +185,7 @@ union kvm_mmu_page_role {
unsigned access:3;
unsigned invalid:1;
unsigned cr4_pge:1;
+ unsigned nxe:1;
};
};
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 12ada0e..637bb80 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -519,6 +519,9 @@ static void set_efer(struct kvm_vcpu *vcpu, u64 efer)
efer |= vcpu->arch.shadow_efer & EFER_LMA;
vcpu->arch.shadow_efer = efer;
+
+ vcpu->arch.mmu.base_role.nxe = (efer & EFER_NX) && !tdp_enabled;
+ kvm_mmu_reset_context(vcpu);
}
void kvm_enable_efer_bits(u64 mask)
--
1.6.0.6
* [PATCH 07/46] KVM: remove pointless conditional before kfree() in lapic initialization
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Wei Yongjun <yjwei@cn.fujitsu.com>
Remove pointless conditional before kfree().
Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 637bb80..34516bf 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1588,8 +1588,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
r = -EINVAL;
}
out:
- if (lapic)
- kfree(lapic);
+ kfree(lapic);
return r;
}
--
1.6.0.6
* [PATCH 08/46] KVM: VMX: Clean up Flex Priority related
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Sheng Yang <sheng@linux.intel.com>
Also remove unnecessary parentheses around return values.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 47 ++++++++++++++++++++++++++++++-----------------
1 files changed, 30 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index aba41ae..1caa1fc 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -216,61 +216,69 @@ static inline int is_external_interrupt(u32 intr_info)
static inline int cpu_has_vmx_msr_bitmap(void)
{
- return (vmcs_config.cpu_based_exec_ctrl & CPU_BASED_USE_MSR_BITMAPS);
+ return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_USE_MSR_BITMAPS;
}
static inline int cpu_has_vmx_tpr_shadow(void)
{
- return (vmcs_config.cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW);
+ return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_TPR_SHADOW;
}
static inline int vm_need_tpr_shadow(struct kvm *kvm)
{
- return ((cpu_has_vmx_tpr_shadow()) && (irqchip_in_kernel(kvm)));
+ return (cpu_has_vmx_tpr_shadow()) && (irqchip_in_kernel(kvm));
}
static inline int cpu_has_secondary_exec_ctrls(void)
{
- return (vmcs_config.cpu_based_exec_ctrl &
- CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
+ return vmcs_config.cpu_based_exec_ctrl &
+ CPU_BASED_ACTIVATE_SECONDARY_CONTROLS;
}
static inline bool cpu_has_vmx_virtualize_apic_accesses(void)
{
- return flexpriority_enabled;
+ return vmcs_config.cpu_based_2nd_exec_ctrl &
+ SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES;
+}
+
+static inline bool cpu_has_vmx_flexpriority(void)
+{
+ return cpu_has_vmx_tpr_shadow() &&
+ cpu_has_vmx_virtualize_apic_accesses();
}
static inline int cpu_has_vmx_invept_individual_addr(void)
{
- return (!!(vmx_capability.ept & VMX_EPT_EXTENT_INDIVIDUAL_BIT));
+ return !!(vmx_capability.ept & VMX_EPT_EXTENT_INDIVIDUAL_BIT);
}
static inline int cpu_has_vmx_invept_context(void)
{
- return (!!(vmx_capability.ept & VMX_EPT_EXTENT_CONTEXT_BIT));
+ return !!(vmx_capability.ept & VMX_EPT_EXTENT_CONTEXT_BIT);
}
static inline int cpu_has_vmx_invept_global(void)
{
- return (!!(vmx_capability.ept & VMX_EPT_EXTENT_GLOBAL_BIT));
+ return !!(vmx_capability.ept & VMX_EPT_EXTENT_GLOBAL_BIT);
}
static inline int cpu_has_vmx_ept(void)
{
- return (vmcs_config.cpu_based_2nd_exec_ctrl &
- SECONDARY_EXEC_ENABLE_EPT);
+ return vmcs_config.cpu_based_2nd_exec_ctrl &
+ SECONDARY_EXEC_ENABLE_EPT;
}
static inline int vm_need_virtualize_apic_accesses(struct kvm *kvm)
{
- return ((cpu_has_vmx_virtualize_apic_accesses()) &&
- (irqchip_in_kernel(kvm)));
+ return flexpriority_enabled &&
+ (cpu_has_vmx_virtualize_apic_accesses()) &&
+ (irqchip_in_kernel(kvm));
}
static inline int cpu_has_vmx_vpid(void)
{
- return (vmcs_config.cpu_based_2nd_exec_ctrl &
- SECONDARY_EXEC_ENABLE_VPID);
+ return vmcs_config.cpu_based_2nd_exec_ctrl &
+ SECONDARY_EXEC_ENABLE_VPID;
}
static inline int cpu_has_virtual_nmis(void)
@@ -278,6 +286,11 @@ static inline int cpu_has_virtual_nmis(void)
return vmcs_config.pin_based_exec_ctrl & PIN_BASED_VIRTUAL_NMIS;
}
+static inline bool report_flexpriority(void)
+{
+ return flexpriority_enabled;
+}
+
static int __find_msr_index(struct vcpu_vmx *vmx, u32 msr)
{
int i;
@@ -1201,7 +1214,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
if (!cpu_has_vmx_ept())
enable_ept = 0;
- if (!(vmcs_config.cpu_based_2nd_exec_ctrl & SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES))
+ if (!cpu_has_vmx_flexpriority())
flexpriority_enabled = 0;
min = 0;
@@ -3655,7 +3668,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
.check_processor_compatibility = vmx_check_processor_compat,
.hardware_enable = hardware_enable,
.hardware_disable = hardware_disable,
- .cpu_has_accelerated_tpr = cpu_has_vmx_virtualize_apic_accesses,
+ .cpu_has_accelerated_tpr = report_flexpriority,
.vcpu_create = vmx_create_vcpu,
.vcpu_free = vmx_free_vcpu,
--
1.6.0.6
* [PATCH 09/46] KVM: VMX: Fix feature testing
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Sheng Yang <sheng@linux.intel.com>
The feature testing currently happens too early, before vmcs_config has
completed initialization.
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 18 +++++++++---------
1 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 1caa1fc..7d7b0d6 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -1208,15 +1208,6 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
vmx_capability.ept, vmx_capability.vpid);
}
- if (!cpu_has_vmx_vpid())
- enable_vpid = 0;
-
- if (!cpu_has_vmx_ept())
- enable_ept = 0;
-
- if (!cpu_has_vmx_flexpriority())
- flexpriority_enabled = 0;
-
min = 0;
#ifdef CONFIG_X86_64
min |= VM_EXIT_HOST_ADDR_SPACE_SIZE;
@@ -1320,6 +1311,15 @@ static __init int hardware_setup(void)
if (boot_cpu_has(X86_FEATURE_NX))
kvm_enable_efer_bits(EFER_NX);
+ if (!cpu_has_vmx_vpid())
+ enable_vpid = 0;
+
+ if (!cpu_has_vmx_ept())
+ enable_ept = 0;
+
+ if (!cpu_has_vmx_flexpriority())
+ flexpriority_enabled = 0;
+
return alloc_kvm_area();
}
--
1.6.0.6
* [PATCH 10/46] KVM: Use rsvd_bits_mask in load_pdptrs()
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Dong, Eddie <eddie.dong@intel.com>
Also remove bits 5-6 from rsvd_bits_mask, per the latest SDM.
Signed-off-by: Eddie Dong <Eddie.Dong@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 8 +++-----
arch/x86/kvm/mmu.h | 5 +++++
arch/x86/kvm/x86.c | 3 ++-
3 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 24f5a57..5baf539 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -225,11 +225,6 @@ static int is_nx(struct kvm_vcpu *vcpu)
return vcpu->arch.shadow_efer & EFER_NX;
}
-static int is_present_pte(unsigned long pte)
-{
- return pte & PT_PRESENT_MASK;
-}
-
static int is_shadow_present_pte(u64 pte)
{
return pte != shadow_trap_nonpresent_pte
@@ -2195,6 +2190,9 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[1][0] = ~0ull;
break;
case PT32E_ROOT_LEVEL:
+ context->rsvd_bits_mask[0][2] =
+ rsvd_bits(maxphyaddr, 63) |
+ rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index eaab214..3494a2f 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -75,4 +75,9 @@ static inline int is_paging(struct kvm_vcpu *vcpu)
return vcpu->arch.cr0 & X86_CR0_PG;
}
+static inline int is_present_pte(unsigned long pte)
+{
+ return pte & PT_PRESENT_MASK;
+}
+
#endif
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 34516bf..7be18d4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -234,7 +234,8 @@ int load_pdptrs(struct kvm_vcpu *vcpu, unsigned long cr3)
goto out;
}
for (i = 0; i < ARRAY_SIZE(pdpte); ++i) {
- if ((pdpte[i] & 1) && (pdpte[i] & 0xfffffff0000001e6ull)) {
+ if (is_present_pte(pdpte[i]) &&
+ (pdpte[i] & vcpu->arch.mmu.rsvd_bits_mask[0][2])) {
ret = 0;
goto out;
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 11/46] KVM: VMX: Fix handling of a fault during NMI unblocked due to IRET
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (9 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 10/46] KVM: Use rsvd_bits_mask in load_pdptrs() Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 12/46] KVM: VMX: Rewrite vmx_complete_interrupt()'s twisted maze of if() statements Avi Kivity
` (34 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Bit 12 is undefined in any of the following cases:
If the VM exit sets the valid bit in the IDT-vectoring information field.
If the VM exit is due to a double fault.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 17 +++++++++++------
1 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 7d7b0d6..631f9b7 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3272,36 +3272,41 @@ static void update_tpr_threshold(struct kvm_vcpu *vcpu)
static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
{
u32 exit_intr_info;
- u32 idt_vectoring_info;
+ u32 idt_vectoring_info = vmx->idt_vectoring_info;
bool unblock_nmi;
u8 vector;
int type;
bool idtv_info_valid;
u32 error;
+ idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
if (cpu_has_virtual_nmis()) {
unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0;
vector = exit_intr_info & INTR_INFO_VECTOR_MASK;
/*
- * SDM 3: 25.7.1.2
+ * SDM 3: 27.7.1.2 (September 2008)
* Re-set bit "block by NMI" before VM entry if vmexit caused by
* a guest IRET fault.
+ * SDM 3: 23.2.2 (September 2008)
+ * Bit 12 is undefined in any of the following cases:
+ * If the VM exit sets the valid bit in the IDT-vectoring
+ * information field.
+ * If the VM exit is due to a double fault.
*/
- if (unblock_nmi && vector != DF_VECTOR)
+ if ((exit_intr_info & INTR_INFO_VALID_MASK) && unblock_nmi &&
+ vector != DF_VECTOR && !idtv_info_valid)
vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
GUEST_INTR_STATE_NMI);
} else if (unlikely(vmx->soft_vnmi_blocked))
vmx->vnmi_blocked_time +=
ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
- idt_vectoring_info = vmx->idt_vectoring_info;
- idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
vector = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
if (vmx->vcpu.arch.nmi_injected) {
/*
- * SDM 3: 25.7.1.2
+ * SDM 3: 27.7.1.2 (September 2008)
* Clear bit "block by NMI" before VM entry if a NMI delivery
* faulted.
*/
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 12/46] KVM: VMX: Rewrite vmx_complete_interrupt()'s twisted maze of if() statements
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (10 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 11/46] KVM: VMX: Fix handling of a fault during NMI unblocked due to IRET Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 13/46] KVM: VMX: Do not zero idt_vectoring_info in vmx_complete_interrupts() Avi Kivity
` (33 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
...with a more straightforward switch().
Also fix a bug where an NMI could be dropped on exit, although this should
never happen in practice, since NMIs can only be injected, never triggered
internally by the guest the way exceptions are.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 43 +++++++++++++++++++++++++------------------
1 files changed, 25 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 631f9b7..577aa95 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3277,7 +3277,6 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
u8 vector;
int type;
bool idtv_info_valid;
- u32 error;
idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK;
exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO);
@@ -3302,34 +3301,42 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
vmx->vnmi_blocked_time +=
ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time));
+ vmx->vcpu.arch.nmi_injected = false;
+ kvm_clear_exception_queue(&vmx->vcpu);
+ kvm_clear_interrupt_queue(&vmx->vcpu);
+
+ if (!idtv_info_valid)
+ return;
+
vector = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
- if (vmx->vcpu.arch.nmi_injected) {
+
+ switch(type) {
+ case INTR_TYPE_NMI_INTR:
+ vmx->vcpu.arch.nmi_injected = true;
/*
* SDM 3: 27.7.1.2 (September 2008)
- * Clear bit "block by NMI" before VM entry if a NMI delivery
- * faulted.
+ * Clear bit "block by NMI" before VM entry if a NMI
+ * delivery faulted.
*/
- if (idtv_info_valid && type == INTR_TYPE_NMI_INTR)
- vmcs_clear_bits(GUEST_INTERRUPTIBILITY_INFO,
- GUEST_INTR_STATE_NMI);
- else
- vmx->vcpu.arch.nmi_injected = false;
- }
- kvm_clear_exception_queue(&vmx->vcpu);
- if (idtv_info_valid && (type == INTR_TYPE_HARD_EXCEPTION ||
- type == INTR_TYPE_SOFT_EXCEPTION)) {
+ vmcs_clear_bits(GUEST_INTERRUPTIBILITY_INFO,
+ GUEST_INTR_STATE_NMI);
+ break;
+ case INTR_TYPE_HARD_EXCEPTION:
+ case INTR_TYPE_SOFT_EXCEPTION:
if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK) {
- error = vmcs_read32(IDT_VECTORING_ERROR_CODE);
- kvm_queue_exception_e(&vmx->vcpu, vector, error);
+ u32 err = vmcs_read32(IDT_VECTORING_ERROR_CODE);
+ kvm_queue_exception_e(&vmx->vcpu, vector, err);
} else
kvm_queue_exception(&vmx->vcpu, vector);
vmx->idt_vectoring_info = 0;
- }
- kvm_clear_interrupt_queue(&vmx->vcpu);
- if (idtv_info_valid && type == INTR_TYPE_EXT_INTR) {
+ break;
+ case INTR_TYPE_EXT_INTR:
kvm_queue_interrupt(&vmx->vcpu, vector);
vmx->idt_vectoring_info = 0;
+ break;
+ default:
+ break;
}
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 13/46] KVM: VMX: Do not zero idt_vectoring_info in vmx_complete_interrupts().
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (11 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 12/46] KVM: VMX: Rewrite vmx_complete_interrupt()'s twisted maze of if() statements Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 14/46] KVM: Fix task switch back link handling Avi Kivity
` (32 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
We will need it later in task_switch().
The code in handle_exception() is dead: is_external_interrupt(vect_info)
will always be false, since idt_vectoring_info is zeroed in
vmx_complete_interrupts().
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 7 -------
1 files changed, 0 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 577aa95..e4ad9d3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2626,11 +2626,6 @@ static int handle_exception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
printk(KERN_ERR "%s: unexpected, vectoring info 0x%x "
"intr info 0x%x\n", __func__, vect_info, intr_info);
- if (!irqchip_in_kernel(vcpu->kvm) && is_external_interrupt(vect_info)) {
- int irq = vect_info & VECTORING_INFO_VECTOR_MASK;
- kvm_push_irq(vcpu, irq);
- }
-
if ((intr_info & INTR_INFO_INTR_TYPE_MASK) == INTR_TYPE_NMI_INTR)
return 1; /* already handled by vmx_vcpu_run() */
@@ -3329,11 +3324,9 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
kvm_queue_exception_e(&vmx->vcpu, vector, err);
} else
kvm_queue_exception(&vmx->vcpu, vector);
- vmx->idt_vectoring_info = 0;
break;
case INTR_TYPE_EXT_INTR:
kvm_queue_interrupt(&vmx->vcpu, vector);
- vmx->idt_vectoring_info = 0;
break;
default:
break;
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 14/46] KVM: Fix task switch back link handling.
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (12 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 13/46] KVM: VMX: Do not zero idt_vectoring_info in vmx_complete_interrupts() Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 15/46] KVM: Fix unneeded instruction skipping during task switching Avi Kivity
` (31 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
The back link is currently written to the wrong TSS.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 40 ++++++++++++++++++++++++++++++++--------
1 files changed, 32 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7be18d4..157d54b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3713,7 +3713,6 @@ static void save_state_to_tss32(struct kvm_vcpu *vcpu,
tss->fs = get_segment_selector(vcpu, VCPU_SREG_FS);
tss->gs = get_segment_selector(vcpu, VCPU_SREG_GS);
tss->ldt_selector = get_segment_selector(vcpu, VCPU_SREG_LDTR);
- tss->prev_task_link = get_segment_selector(vcpu, VCPU_SREG_TR);
}
static int load_state_from_tss32(struct kvm_vcpu *vcpu,
@@ -3810,8 +3809,8 @@ static int load_state_from_tss16(struct kvm_vcpu *vcpu,
}
static int kvm_task_switch_16(struct kvm_vcpu *vcpu, u16 tss_selector,
- u32 old_tss_base,
- struct desc_struct *nseg_desc)
+ u16 old_tss_sel, u32 old_tss_base,
+ struct desc_struct *nseg_desc)
{
struct tss_segment_16 tss_segment_16;
int ret = 0;
@@ -3830,6 +3829,16 @@ static int kvm_task_switch_16(struct kvm_vcpu *vcpu, u16 tss_selector,
&tss_segment_16, sizeof tss_segment_16))
goto out;
+ if (old_tss_sel != 0xffff) {
+ tss_segment_16.prev_task_link = old_tss_sel;
+
+ if (kvm_write_guest(vcpu->kvm,
+ get_tss_base_addr(vcpu, nseg_desc),
+ &tss_segment_16.prev_task_link,
+ sizeof tss_segment_16.prev_task_link))
+ goto out;
+ }
+
if (load_state_from_tss16(vcpu, &tss_segment_16))
goto out;
@@ -3839,7 +3848,7 @@ out:
}
static int kvm_task_switch_32(struct kvm_vcpu *vcpu, u16 tss_selector,
- u32 old_tss_base,
+ u16 old_tss_sel, u32 old_tss_base,
struct desc_struct *nseg_desc)
{
struct tss_segment_32 tss_segment_32;
@@ -3859,6 +3868,16 @@ static int kvm_task_switch_32(struct kvm_vcpu *vcpu, u16 tss_selector,
&tss_segment_32, sizeof tss_segment_32))
goto out;
+ if (old_tss_sel != 0xffff) {
+ tss_segment_32.prev_task_link = old_tss_sel;
+
+ if (kvm_write_guest(vcpu->kvm,
+ get_tss_base_addr(vcpu, nseg_desc),
+ &tss_segment_32.prev_task_link,
+ sizeof tss_segment_32.prev_task_link))
+ goto out;
+ }
+
if (load_state_from_tss32(vcpu, &tss_segment_32))
goto out;
@@ -3914,12 +3933,17 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason)
kvm_x86_ops->skip_emulated_instruction(vcpu);
+ /* set back link to prev task only if NT bit is set in eflags
+ note that old_tss_sel is not used after this point */
+ if (reason != TASK_SWITCH_CALL && reason != TASK_SWITCH_GATE)
+ old_tss_sel = 0xffff;
+
if (nseg_desc.type & 8)
- ret = kvm_task_switch_32(vcpu, tss_selector, old_tss_base,
- &nseg_desc);
+ ret = kvm_task_switch_32(vcpu, tss_selector, old_tss_sel,
+ old_tss_base, &nseg_desc);
else
- ret = kvm_task_switch_16(vcpu, tss_selector, old_tss_base,
- &nseg_desc);
+ ret = kvm_task_switch_16(vcpu, tss_selector, old_tss_sel,
+ old_tss_base, &nseg_desc);
if (reason == TASK_SWITCH_CALL || reason == TASK_SWITCH_GATE) {
u32 eflags = kvm_x86_ops->get_rflags(vcpu);
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 15/46] KVM: Fix unneeded instruction skipping during task switching.
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (13 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 14/46] KVM: Fix task switch back link handling Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 16/46] KVM: MMU: Discard reserved bits checking on PDE bit 7-8 Avi Kivity
` (30 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
There is no need to skip the instruction if the reason for the task switch
is a task gate in the IDT and access to it was caused by an external event.
The problem is currently solved only for VMX, since there is no reliable
way to skip an instruction in SVM; we should emulate it instead.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/svm.h | 1 +
arch/x86/kvm/svm.c | 25 ++++++++++++++++++-------
arch/x86/kvm/vmx.c | 38 ++++++++++++++++++++++++++++----------
arch/x86/kvm/x86.c | 5 ++++-
4 files changed, 51 insertions(+), 18 deletions(-)
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 82ada75..85574b7 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -225,6 +225,7 @@ struct __attribute__ ((__packed__)) vmcb {
#define SVM_EVTINJ_VALID_ERR (1 << 11)
#define SVM_EXITINTINFO_VEC_MASK SVM_EVTINJ_VEC_MASK
+#define SVM_EXITINTINFO_TYPE_MASK SVM_EVTINJ_TYPE_MASK
#define SVM_EXITINTINFO_TYPE_INTR SVM_EVTINJ_TYPE_INTR
#define SVM_EXITINTINFO_TYPE_NMI SVM_EVTINJ_TYPE_NMI
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index de74104..bba67b7 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1825,17 +1825,28 @@ static int task_switch_interception(struct vcpu_svm *svm,
struct kvm_run *kvm_run)
{
u16 tss_selector;
+ int reason;
+ int int_type = svm->vmcb->control.exit_int_info &
+ SVM_EXITINTINFO_TYPE_MASK;
tss_selector = (u16)svm->vmcb->control.exit_info_1;
+
if (svm->vmcb->control.exit_info_2 &
(1ULL << SVM_EXITINFOSHIFT_TS_REASON_IRET))
- return kvm_task_switch(&svm->vcpu, tss_selector,
- TASK_SWITCH_IRET);
- if (svm->vmcb->control.exit_info_2 &
- (1ULL << SVM_EXITINFOSHIFT_TS_REASON_JMP))
- return kvm_task_switch(&svm->vcpu, tss_selector,
- TASK_SWITCH_JMP);
- return kvm_task_switch(&svm->vcpu, tss_selector, TASK_SWITCH_CALL);
+ reason = TASK_SWITCH_IRET;
+ else if (svm->vmcb->control.exit_info_2 &
+ (1ULL << SVM_EXITINFOSHIFT_TS_REASON_JMP))
+ reason = TASK_SWITCH_JMP;
+ else if (svm->vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID)
+ reason = TASK_SWITCH_GATE;
+ else
+ reason = TASK_SWITCH_CALL;
+
+
+ if (reason != TASK_SWITCH_GATE || int_type == SVM_EXITINTINFO_TYPE_SOFT)
+ skip_emulated_instruction(&svm->vcpu);
+
+ return kvm_task_switch(&svm->vcpu, tss_selector, reason);
}
static int cpuid_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index e4ad9d3..c6997c0 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3038,22 +3038,40 @@ static int handle_task_switch(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
struct vcpu_vmx *vmx = to_vmx(vcpu);
unsigned long exit_qualification;
u16 tss_selector;
- int reason;
+ int reason, type, idt_v;
+
+ idt_v = (vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK);
+ type = (vmx->idt_vectoring_info & VECTORING_INFO_TYPE_MASK);
exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
reason = (u32)exit_qualification >> 30;
- if (reason == TASK_SWITCH_GATE && vmx->vcpu.arch.nmi_injected &&
- (vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
- (vmx->idt_vectoring_info & VECTORING_INFO_TYPE_MASK)
- == INTR_TYPE_NMI_INTR) {
- vcpu->arch.nmi_injected = false;
- if (cpu_has_virtual_nmis())
- vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
- GUEST_INTR_STATE_NMI);
+ if (reason == TASK_SWITCH_GATE && idt_v) {
+ switch (type) {
+ case INTR_TYPE_NMI_INTR:
+ vcpu->arch.nmi_injected = false;
+ if (cpu_has_virtual_nmis())
+ vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO,
+ GUEST_INTR_STATE_NMI);
+ break;
+ case INTR_TYPE_EXT_INTR:
+ kvm_clear_interrupt_queue(vcpu);
+ break;
+ case INTR_TYPE_HARD_EXCEPTION:
+ case INTR_TYPE_SOFT_EXCEPTION:
+ kvm_clear_exception_queue(vcpu);
+ break;
+ default:
+ break;
+ }
}
tss_selector = exit_qualification;
+ if (!idt_v || (type != INTR_TYPE_HARD_EXCEPTION &&
+ type != INTR_TYPE_EXT_INTR &&
+ type != INTR_TYPE_NMI_INTR))
+ skip_emulated_instruction(vcpu);
+
if (!kvm_task_switch(vcpu, tss_selector, reason))
return 0;
@@ -3306,7 +3324,7 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
vector = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
- switch(type) {
+ switch (type) {
case INTR_TYPE_NMI_INTR:
vmx->vcpu.arch.nmi_injected = true;
/*
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 157d54b..95f1369 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3931,7 +3931,10 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason)
kvm_x86_ops->set_rflags(vcpu, eflags & ~X86_EFLAGS_NT);
}
- kvm_x86_ops->skip_emulated_instruction(vcpu);
+ /* set back link to prev task only if NT bit is set in eflags
+ note that old_tss_sel is not used after this point */
+ if (reason != TASK_SWITCH_CALL && reason != TASK_SWITCH_GATE)
+ old_tss_sel = 0xffff;
/* set back link to prev task only if NT bit is set in eflags
note that old_tss_sel is not used after this point */
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 16/46] KVM: MMU: Discard reserved bits checking on PDE bit 7-8
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (14 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 15/46] KVM: Fix unneeded instruction skipping during task switching Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 17/46] KVM: x86 emulator: fix call near emulation Avi Kivity
` (29 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Sheng Yang <sheng@linux.intel.com>
1. This is related to a Linux kernel bug that was fixed by Ingo in commit
07a66d7c53a538e1a9759954a82bb6c07365eff9. The original code had existed for
quite a long time: it would convert a large-page PDE into a normal PDE, but
failed to fit the normal PDE format well. With the code before Ingo's fix,
the kernel would fail reserved-bit checking on bit 8 - the leftover global
bit of the PTE - and so would receive a double fault.
2. After discussion, we decided to drop the reserved-bit checking on PDE
bits 7-8 for now. These bits are marked as reserved in the SDM, but are not
in fact checked by the processor...
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5baf539..409d08e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2194,7 +2194,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
rsvd_bits(maxphyaddr, 63) |
rsvd_bits(7, 8) | rsvd_bits(1, 2); /* PDPTE */
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
- rsvd_bits(maxphyaddr, 62); /* PDE */
+ rsvd_bits(maxphyaddr, 62); /* PDE */
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 62); /* PTE */
context->rsvd_bits_mask[1][1] = exb_bit_rsvd |
@@ -2208,13 +2208,14 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
context->rsvd_bits_mask[0][2] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 51) | rsvd_bits(7, 8);
context->rsvd_bits_mask[0][1] = exb_bit_rsvd |
- rsvd_bits(maxphyaddr, 51) | rsvd_bits(7, 8);
+ rsvd_bits(maxphyaddr, 51);
context->rsvd_bits_mask[0][0] = exb_bit_rsvd |
rsvd_bits(maxphyaddr, 51);
context->rsvd_bits_mask[1][3] = context->rsvd_bits_mask[0][3];
context->rsvd_bits_mask[1][2] = context->rsvd_bits_mask[0][2];
context->rsvd_bits_mask[1][1] = exb_bit_rsvd |
- rsvd_bits(maxphyaddr, 51) | rsvd_bits(13, 20);
+ rsvd_bits(maxphyaddr, 51) |
+ rsvd_bits(13, 20); /* large page */
context->rsvd_bits_mask[1][0] = ~0ull;
break;
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 17/46] KVM: x86 emulator: fix call near emulation
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (15 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 16/46] KVM: MMU: Discard reserved bits checking on PDE bit 7-8 Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 18/46] KVM: ia64: make kvm depend on CONFIG_MODULES Avi Kivity
` (28 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
The length of pushed on to the stack return address depends on operand
size not address size.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index ca91749..d7c9f6f 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -1792,7 +1792,6 @@ special_insn:
}
c->src.val = (unsigned long) c->eip;
jmp_rel(c, rel);
- c->op_bytes = c->ad_bytes;
emulate_push(ctxt);
break;
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 18/46] KVM: ia64: make kvm depend on CONFIG_MODULES.
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (16 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 17/46] KVM: x86 emulator: fix call near emulation Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 19/46] KVM: PIT: fix count read and mode 0 handling Avi Kivity
` (27 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Zhang, Xiantao <xiantao.zhang@intel.com>
Since the kvm-intel module can't be built in, make kvm depend on
CONFIG_MODULES.
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/Kconfig | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/ia64/kvm/Kconfig b/arch/ia64/kvm/Kconfig
index 0a2d6b8..64d5209 100644
--- a/arch/ia64/kvm/Kconfig
+++ b/arch/ia64/kvm/Kconfig
@@ -23,7 +23,7 @@ if VIRTUALIZATION
config KVM
tristate "Kernel-based Virtual Machine (KVM) support"
- depends on HAVE_KVM && EXPERIMENTAL
+ depends on HAVE_KVM && MODULES && EXPERIMENTAL
# for device assignment:
depends on PCI
select PREEMPT_NOTIFIERS
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 19/46] KVM: PIT: fix count read and mode 0 handling
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (17 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 18/46] KVM: ia64: make kvm depend on CONFIG_MODULES Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 20/46] KVM: Make kvm header C++ friendly Avi Kivity
` (26 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Marcelo Tosatti <mtosatti@redhat.com>
Commit 46ee278652f4cbd51013471b64c7897ba9bcd1b1 causes Solaris 10
to hang on boot.
Assuming that PIT counter reads should return 0 for an expired timer
is wrong: when it is active, the counter never stops (see comment on
__kpit_elapsed).
Also arm a one shot timer for mode 0.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/i8254.c | 26 +++++++++++++++-----------
1 files changed, 15 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index cf09bb6..4d6f0d2 100644
--- a/arch/x86/kvm/i8254.c
+++ b/arch/x86/kvm/i8254.c
@@ -104,13 +104,18 @@ static s64 __kpit_elapsed(struct kvm *kvm)
ktime_t remaining;
struct kvm_kpit_state *ps = &kvm->arch.vpit->pit_state;
+ /*
+ * The Counter does not stop when it reaches zero. In
+ * Modes 0, 1, 4, and 5 the Counter ``wraps around'' to
+ * the highest count, either FFFF hex for binary counting
+ * or 9999 for BCD counting, and continues counting.
+ * Modes 2 and 3 are periodic; the Counter reloads
+ * itself with the initial count and continues counting
+ * from there.
+ */
remaining = hrtimer_expires_remaining(&ps->pit_timer.timer);
- if (ktime_to_ns(remaining) < 0)
- remaining = ktime_set(0, 0);
-
- elapsed = ps->pit_timer.period;
- if (ktime_to_ns(remaining) <= ps->pit_timer.period)
- elapsed = ps->pit_timer.period - ktime_to_ns(remaining);
+ elapsed = ps->pit_timer.period - ktime_to_ns(remaining);
+ elapsed = mod_64(elapsed, ps->pit_timer.period);
return elapsed;
}
@@ -280,7 +285,7 @@ static void create_pit_timer(struct kvm_kpit_state *ps, u32 val, int is_period)
/* TODO The new value only affected after the retriggered */
hrtimer_cancel(&pt->timer);
- pt->period = (is_period == 0) ? 0 : interval;
+ pt->period = interval;
ps->is_periodic = is_period;
pt->timer.function = kvm_timer_fn;
@@ -304,10 +309,8 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val)
pr_debug("pit: load_count val is %d, channel is %d\n", val, channel);
/*
- * Though spec said the state of 8254 is undefined after power-up,
- * seems some tricky OS like Windows XP depends on IRQ0 interrupt
- * when booting up.
- * So here setting initialize rate for it, and not a specific number
+ * The largest possible initial count is 0; this is equivalent
+ * to 2^16 for binary counting and 10^4 for BCD counting.
*/
if (val == 0)
val = 0x10000;
@@ -322,6 +325,7 @@ static void pit_load_count(struct kvm *kvm, int channel, u32 val)
/* Two types of timer
* mode 1 is one shot, mode 2 is period, otherwise del timer */
switch (ps->channels[0].mode) {
+ case 0:
case 1:
/* FIXME: enhance mode 4 precision */
case 4:
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 20/46] KVM: Make kvm header C++ friendly
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (18 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 19/46] KVM: PIT: fix count read and mode 0 handling Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 21/46] KVM: MMU: remove global page optimization logic Avi Kivity
` (25 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: nathan binkert <nate@binkert.org>
Two things needed fixing: 1) g++ does not allow a named structure type
within an anonymous union and 2) Avoid name clash between two padding
fields within the same struct by giving them different names as is
done elsewhere in the header.
Signed-off-by: Nathan Binkert <nate@binkert.org>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
include/linux/kvm.h | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/linux/kvm.h b/include/linux/kvm.h
index 644e3a9..3db5d8d 100644
--- a/include/linux/kvm.h
+++ b/include/linux/kvm.h
@@ -119,7 +119,7 @@ struct kvm_run {
__u32 error_code;
} ex;
/* KVM_EXIT_IO */
- struct kvm_io {
+ struct {
#define KVM_EXIT_IO_IN 0
#define KVM_EXIT_IO_OUT 1
__u8 direction;
@@ -224,10 +224,10 @@ struct kvm_interrupt {
/* for KVM_GET_DIRTY_LOG */
struct kvm_dirty_log {
__u32 slot;
- __u32 padding;
+ __u32 padding1;
union {
void __user *dirty_bitmap; /* one bit per page */
- __u64 padding;
+ __u64 padding2;
};
};
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 21/46] KVM: MMU: remove global page optimization logic
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (19 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 20/46] KVM: Make kvm header C++ friendly Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 22/46] KVM: x86 emulator: Add decoding of 16bit second immediate argument Avi Kivity
` (24 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Marcelo Tosatti <mtosatti@redhat.com>
The complexity required to fix it is not worth the gains, as discussed
in http://article.gmane.org/gmane.comp.emulators.kvm.devel/28649.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 4 ---
arch/x86/kvm/mmu.c | 50 ++++----------------------------------
arch/x86/kvm/paging_tmpl.h | 6 +---
arch/x86/kvm/x86.c | 4 ---
4 files changed, 8 insertions(+), 56 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3fc4623..0e3a7c6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -213,7 +213,6 @@ struct kvm_mmu_page {
int multimapped; /* More than one parent_pte? */
int root_count; /* Currently serving as active root */
bool unsync;
- bool global;
unsigned int unsync_children;
union {
u64 *parent_pte; /* !multimapped */
@@ -395,7 +394,6 @@ struct kvm_arch{
*/
struct list_head active_mmu_pages;
struct list_head assigned_dev_head;
- struct list_head oos_global_pages;
struct iommu_domain *iommu_domain;
struct kvm_pic *vpic;
struct kvm_ioapic *vioapic;
@@ -425,7 +423,6 @@ struct kvm_vm_stat {
u32 mmu_recycled;
u32 mmu_cache_miss;
u32 mmu_unsync;
- u32 mmu_unsync_global;
u32 remote_tlb_flush;
u32 lpages;
};
@@ -640,7 +637,6 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu);
int kvm_mmu_load(struct kvm_vcpu *vcpu);
void kvm_mmu_unload(struct kvm_vcpu *vcpu);
void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu);
-void kvm_mmu_sync_global(struct kvm_vcpu *vcpu);
int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 409d08e..5b79afa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1075,18 +1075,10 @@ static struct kvm_mmu_page *kvm_mmu_lookup_page(struct kvm *kvm, gfn_t gfn)
return NULL;
}
-static void kvm_unlink_unsync_global(struct kvm *kvm, struct kvm_mmu_page *sp)
-{
- list_del(&sp->oos_link);
- --kvm->stat.mmu_unsync_global;
-}
-
static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
{
WARN_ON(!sp->unsync);
sp->unsync = 0;
- if (sp->global)
- kvm_unlink_unsync_global(kvm, sp);
--kvm->stat.mmu_unsync;
}
@@ -1249,7 +1241,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
pgprintk("%s: adding gfn %lx role %x\n", __func__, gfn, role.word);
sp->gfn = gfn;
sp->role = role;
- sp->global = 0;
hlist_add_head(&sp->hash_link, bucket);
if (!direct) {
if (rmap_write_protect(vcpu->kvm, gfn))
@@ -1647,11 +1638,7 @@ static int kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
++vcpu->kvm->stat.mmu_unsync;
sp->unsync = 1;
- if (sp->global) {
- list_add(&sp->oos_link, &vcpu->kvm->arch.oos_global_pages);
- ++vcpu->kvm->stat.mmu_unsync_global;
- } else
- kvm_mmu_mark_parents_unsync(vcpu, sp);
+ kvm_mmu_mark_parents_unsync(vcpu, sp);
mmu_convert_notrap(sp);
return 0;
@@ -1678,21 +1665,12 @@ static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
static int set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
unsigned pte_access, int user_fault,
int write_fault, int dirty, int largepage,
- int global, gfn_t gfn, pfn_t pfn, bool speculative,
+ gfn_t gfn, pfn_t pfn, bool speculative,
bool can_unsync)
{
u64 spte;
int ret = 0;
u64 mt_mask = shadow_mt_mask;
- struct kvm_mmu_page *sp = page_header(__pa(shadow_pte));
-
- if (!global && sp->global) {
- sp->global = 0;
- if (sp->unsync) {
- kvm_unlink_unsync_global(vcpu->kvm, sp);
- kvm_mmu_mark_parents_unsync(vcpu, sp);
- }
- }
/*
* We don't set the accessed bit, since we sometimes want to see
@@ -1766,8 +1744,8 @@ set_pte:
static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
unsigned pt_access, unsigned pte_access,
int user_fault, int write_fault, int dirty,
- int *ptwrite, int largepage, int global,
- gfn_t gfn, pfn_t pfn, bool speculative)
+ int *ptwrite, int largepage, gfn_t gfn,
+ pfn_t pfn, bool speculative)
{
int was_rmapped = 0;
int was_writeble = is_writeble_pte(*shadow_pte);
@@ -1796,7 +1774,7 @@ static void mmu_set_spte(struct kvm_vcpu *vcpu, u64 *shadow_pte,
was_rmapped = 1;
}
if (set_spte(vcpu, shadow_pte, pte_access, user_fault, write_fault,
- dirty, largepage, global, gfn, pfn, speculative, true)) {
+ dirty, largepage, gfn, pfn, speculative, true)) {
if (write_fault)
*ptwrite = 1;
kvm_x86_ops->tlb_flush(vcpu);
@@ -1844,7 +1822,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t v, int write,
|| (largepage && iterator.level == PT_DIRECTORY_LEVEL)) {
mmu_set_spte(vcpu, iterator.sptep, ACC_ALL, ACC_ALL,
0, write, 1, &pt_write,
- largepage, 0, gfn, pfn, false);
+ largepage, gfn, pfn, false);
++vcpu->stat.pf_fixed;
break;
}
@@ -2015,15 +1993,6 @@ static void mmu_sync_roots(struct kvm_vcpu *vcpu)
}
}
-static void mmu_sync_global(struct kvm_vcpu *vcpu)
-{
- struct kvm *kvm = vcpu->kvm;
- struct kvm_mmu_page *sp, *n;
-
- list_for_each_entry_safe(sp, n, &kvm->arch.oos_global_pages, oos_link)
- kvm_sync_page(vcpu, sp);
-}
-
void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
{
spin_lock(&vcpu->kvm->mmu_lock);
@@ -2031,13 +2000,6 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
spin_unlock(&vcpu->kvm->mmu_lock);
}
-void kvm_mmu_sync_global(struct kvm_vcpu *vcpu)
-{
- spin_lock(&vcpu->kvm->mmu_lock);
- mmu_sync_global(vcpu);
- spin_unlock(&vcpu->kvm->mmu_lock);
-}
-
static gpa_t nonpaging_gva_to_gpa(struct kvm_vcpu *vcpu, gva_t vaddr)
{
return vaddr;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 09782a9..258e459 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -268,8 +268,7 @@ static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *page,
kvm_get_pfn(pfn);
mmu_set_spte(vcpu, spte, page->role.access, pte_access, 0, 0,
gpte & PT_DIRTY_MASK, NULL, largepage,
- gpte & PT_GLOBAL_MASK, gpte_to_gfn(gpte),
- pfn, true);
+ gpte_to_gfn(gpte), pfn, true);
}
/*
@@ -303,7 +302,6 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
user_fault, write_fault,
gw->ptes[gw->level-1] & PT_DIRTY_MASK,
ptwrite, largepage,
- gw->ptes[gw->level-1] & PT_GLOBAL_MASK,
gw->gfn, pfn, false);
break;
}
@@ -592,7 +590,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
nr_present++;
pte_access = sp->role.access & FNAME(gpte_access)(vcpu, gpte);
set_spte(vcpu, &sp->spt[i], pte_access, 0, 0,
- is_dirty_pte(gpte), 0, gpte & PT_GLOBAL_MASK, gfn,
+ is_dirty_pte(gpte), 0, gfn,
spte_to_pfn(sp->spt[i]), true, false);
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95f1369..9b89d9b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -108,7 +108,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "mmu_recycled", VM_STAT(mmu_recycled) },
{ "mmu_cache_miss", VM_STAT(mmu_cache_miss) },
{ "mmu_unsync", VM_STAT(mmu_unsync) },
- { "mmu_unsync_global", VM_STAT(mmu_unsync_global) },
{ "remote_tlb_flush", VM_STAT(remote_tlb_flush) },
{ "largepages", VM_STAT(lpages) },
{ NULL }
@@ -322,7 +321,6 @@ void kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
kvm_x86_ops->set_cr0(vcpu, cr0);
vcpu->arch.cr0 = cr0;
- kvm_mmu_sync_global(vcpu);
kvm_mmu_reset_context(vcpu);
return;
}
@@ -367,7 +365,6 @@ void kvm_set_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
kvm_x86_ops->set_cr4(vcpu, cr4);
vcpu->arch.cr4 = cr4;
vcpu->arch.mmu.base_role.cr4_pge = (cr4 & X86_CR4_PGE) && !tdp_enabled;
- kvm_mmu_sync_global(vcpu);
kvm_mmu_reset_context(vcpu);
}
EXPORT_SYMBOL_GPL(kvm_set_cr4);
@@ -4360,7 +4357,6 @@ struct kvm *kvm_arch_create_vm(void)
return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&kvm->arch.active_mmu_pages);
- INIT_LIST_HEAD(&kvm->arch.oos_global_pages);
INIT_LIST_HEAD(&kvm->arch.assigned_dev_head);
/* Reserve bit 0 of irq_sources_bitmap for userspace irq source */
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 22/46] KVM: x86 emulator: Add decoding of 16bit second immediate argument
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (20 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 21/46] KVM: MMU: remove global page optimization logic Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 23/46] KVM: x86 emulator: Add lcall decoding Avi Kivity
` (23 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Such as the segment selector in lcall/ljmp.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 7 +++++++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index d7c9f6f..c015063 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -76,6 +76,7 @@
#define Src2CL (1<<29)
#define Src2ImmByte (2<<29)
#define Src2One (3<<29)
+#define Src2Imm16 (4<<29)
#define Src2Mask (7<<29)
enum {
@@ -1072,6 +1073,12 @@ done_prefixes:
c->src2.bytes = 1;
c->src2.val = insn_fetch(u8, 1, c->eip);
break;
+ case Src2Imm16:
+ c->src2.type = OP_IMM;
+ c->src2.ptr = (unsigned long *)c->eip;
+ c->src2.bytes = 2;
+ c->src2.val = insn_fetch(u16, 2, c->eip);
+ break;
case Src2One:
c->src2.bytes = 1;
c->src2.val = 1;
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 23/46] KVM: x86 emulator: Add lcall decoding
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (21 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 22/46] KVM: x86 emulator: Add decoding of 16bit second immediate argument Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 24/46] KVM: x86 emulator: Complete ljmp decoding at decode stage Avi Kivity
` (22 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
No emulation yet.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index c015063..71b4bee 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -154,7 +154,8 @@ static u32 opcode_table[256] = {
/* 0x90 - 0x97 */
DstReg, DstReg, DstReg, DstReg, DstReg, DstReg, DstReg, DstReg,
/* 0x98 - 0x9F */
- 0, 0, 0, 0, ImplicitOps | Stack, ImplicitOps | Stack, 0, 0,
+ 0, 0, SrcImm | Src2Imm16, 0,
+ ImplicitOps | Stack, ImplicitOps | Stack, 0, 0,
/* 0xA0 - 0xA7 */
ByteOp | DstReg | SrcMem | Mov | MemAbs, DstReg | SrcMem | Mov | MemAbs,
ByteOp | DstMem | SrcReg | Mov | MemAbs, DstMem | SrcReg | Mov | MemAbs,
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 24/46] KVM: x86 emulator: Complete ljmp decoding at decode stage
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (22 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 23/46] KVM: x86 emulator: Add lcall decoding Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 25/46] KVM: x86 emulator: Complete short/near jcc decoding in " Avi Kivity
` (21 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 25 +++++--------------------
1 files changed, 5 insertions(+), 20 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 71b4bee..8779cf2 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -193,7 +193,7 @@ static u32 opcode_table[256] = {
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
/* 0xE8 - 0xEF */
ImplicitOps | Stack, SrcImm | ImplicitOps,
- ImplicitOps, SrcImmByte | ImplicitOps,
+ SrcImm | Src2Imm16, SrcImmByte | ImplicitOps,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
/* 0xF0 - 0xF7 */
@@ -1805,30 +1805,15 @@ special_insn:
}
case 0xe9: /* jmp rel */
goto jmp;
- case 0xea: /* jmp far */ {
- uint32_t eip;
- uint16_t sel;
-
- switch (c->op_bytes) {
- case 2:
- eip = insn_fetch(u16, 2, c->eip);
- break;
- case 4:
- eip = insn_fetch(u32, 4, c->eip);
- break;
- default:
- DPRINTF("jmp far: Invalid op_bytes\n");
- goto cannot_emulate;
- }
- sel = insn_fetch(u16, 2, c->eip);
- if (kvm_load_segment_descriptor(ctxt->vcpu, sel, 9, VCPU_SREG_CS) < 0) {
+ case 0xea: /* jmp far */
+ if (kvm_load_segment_descriptor(ctxt->vcpu, c->src2.val, 9,
+ VCPU_SREG_CS) < 0) {
DPRINTF("jmp far: Failed to load CS descriptor\n");
goto cannot_emulate;
}
- c->eip = eip;
+ c->eip = c->src.val;
break;
- }
case 0xeb:
jmp: /* jmp rel short */
jmp_rel(c, c->src.val);
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 25/46] KVM: x86 emulator: Complete short/near jcc decoding in decode stage
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (23 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 24/46] KVM: x86 emulator: Complete ljmp decoding at decode stage Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 26/46] KVM: x86 emulator: Complete decoding of call near " Avi Kivity
` (20 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 42 ++++++++++--------------------------------
1 files changed, 10 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 8779cf2..14b8ee2 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -136,11 +136,11 @@ static u32 opcode_table[256] = {
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* insb, insw/insd */
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps, /* outsb, outsw/outsd */
/* 0x70 - 0x77 */
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+ SrcImmByte, SrcImmByte, SrcImmByte, SrcImmByte,
+ SrcImmByte, SrcImmByte, SrcImmByte, SrcImmByte,
/* 0x78 - 0x7F */
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+ SrcImmByte, SrcImmByte, SrcImmByte, SrcImmByte,
+ SrcImmByte, SrcImmByte, SrcImmByte, SrcImmByte,
/* 0x80 - 0x87 */
Group | Group1_80, Group | Group1_81,
Group | Group1_82, Group | Group1_83,
@@ -232,10 +232,8 @@ static u32 twobyte_table[256] = {
/* 0x70 - 0x7F */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
/* 0x80 - 0x8F */
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
- ImplicitOps, ImplicitOps, ImplicitOps, ImplicitOps,
+ SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm,
+ SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm, SrcImm,
/* 0x90 - 0x9F */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
/* 0xA0 - 0xA7 */
@@ -1539,13 +1537,10 @@ special_insn:
return -1;
}
return 0;
- case 0x70 ... 0x7f: /* jcc (short) */ {
- int rel = insn_fetch(s8, 1, c->eip);
-
+ case 0x70 ... 0x7f: /* jcc (short) */
if (test_cc(c->b, ctxt->eflags))
- jmp_rel(c, rel);
+ jmp_rel(c, c->src.val);
break;
- }
case 0x80 ... 0x83: /* Grp1 */
switch (c->modrm_reg) {
case 0:
@@ -2031,28 +2026,11 @@ twobyte_insn:
if (!test_cc(c->b, ctxt->eflags))
c->dst.type = OP_NONE; /* no writeback */
break;
- case 0x80 ... 0x8f: /* jnz rel, etc*/ {
- long int rel;
-
- switch (c->op_bytes) {
- case 2:
- rel = insn_fetch(s16, 2, c->eip);
- break;
- case 4:
- rel = insn_fetch(s32, 4, c->eip);
- break;
- case 8:
- rel = insn_fetch(s64, 8, c->eip);
- break;
- default:
- DPRINTF("jnz: Invalid op_bytes\n");
- goto cannot_emulate;
- }
+ case 0x80 ... 0x8f: /* jnz rel, etc*/
if (test_cc(c->b, ctxt->eflags))
- jmp_rel(c, rel);
+ jmp_rel(c, c->src.val);
c->dst.type = OP_NONE;
break;
- }
case 0xa3:
bt: /* bt */
c->dst.type = OP_NONE;
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 26/46] KVM: x86 emulator: Complete decoding of call near in decode stage
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (24 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 25/46] KVM: x86 emulator: Complete short/near jcc decoding in " Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 27/46] KVM: x86 emulator: Add unsigned byte immediate decode Avi Kivity
` (19 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 15 ++-------------
1 files changed, 2 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 14b8ee2..4a9cd4c 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -192,7 +192,7 @@ static u32 opcode_table[256] = {
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
/* 0xE8 - 0xEF */
- ImplicitOps | Stack, SrcImm | ImplicitOps,
+ SrcImm | Stack, SrcImm | ImplicitOps,
SrcImm | Src2Imm16, SrcImmByte | ImplicitOps,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
@@ -1781,18 +1781,7 @@ special_insn:
io_dir_in = 0;
goto do_io;
case 0xe8: /* call (near) */ {
- long int rel;
- switch (c->op_bytes) {
- case 2:
- rel = insn_fetch(s16, 2, c->eip);
- break;
- case 4:
- rel = insn_fetch(s32, 4, c->eip);
- break;
- default:
- DPRINTF("Call: Invalid op_bytes\n");
- goto cannot_emulate;
- }
+ long int rel = c->src.val;
c->src.val = (unsigned long) c->eip;
jmp_rel(c, rel);
emulate_push(ctxt);
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 27/46] KVM: x86 emulator: Add unsigned byte immediate decode
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (25 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 26/46] KVM: x86 emulator: Complete decoding of call near " Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 28/46] KVM: x86 emulator: Completely decode in/out at decoding stage Avi Kivity
` (18 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Extend the "Source operand type" opcode description field to 4 bits
to accommodate the new option.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 17 +++++++++++------
1 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 4a9cd4c..0988a13 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -59,13 +59,14 @@
#define SrcImm (5<<4) /* Immediate operand. */
#define SrcImmByte (6<<4) /* 8-bit sign-extended immediate operand. */
#define SrcOne (7<<4) /* Implied '1' */
-#define SrcMask (7<<4)
+#define SrcImmUByte (8<<4) /* 8-bit unsigned immediate operand. */
+#define SrcMask (0xf<<4)
/* Generic ModRM decode. */
-#define ModRM (1<<7)
+#define ModRM (1<<8)
/* Destination is only written; never read. */
-#define Mov (1<<8)
-#define BitOp (1<<9)
-#define MemAbs (1<<10) /* Memory operand is absolute displacement */
+#define Mov (1<<9)
+#define BitOp (1<<10)
+#define MemAbs (1<<11) /* Memory operand is absolute displacement */
#define String (1<<12) /* String instruction (rep capable) */
#define Stack (1<<13) /* Stack instruction (push/pop) */
#define Group (1<<14) /* Bits 3:5 of modrm byte extend opcode */
@@ -1044,10 +1045,14 @@ done_prefixes:
}
break;
case SrcImmByte:
+ case SrcImmUByte:
c->src.type = OP_IMM;
c->src.ptr = (unsigned long *)c->eip;
c->src.bytes = 1;
- c->src.val = insn_fetch(s8, 1, c->eip);
+ if ((c->d & SrcMask) == SrcImmByte)
+ c->src.val = insn_fetch(s8, 1, c->eip);
+ else
+ c->src.val = insn_fetch(u8, 1, c->eip);
break;
case SrcOne:
c->src.bytes = 1;
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 28/46] KVM: x86 emulator: Completely decode in/out at decoding stage
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (26 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 27/46] KVM: x86 emulator: Add unsigned byte immediate decode Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 29/46] KVM: x86 emulator: Decode soft interrupt instructions Avi Kivity
` (17 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 8 ++++----
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index 0988a13..c2f55ca 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -190,8 +190,8 @@ static u32 opcode_table[256] = {
0, 0, 0, 0, 0, 0, 0, 0,
/* 0xE0 - 0xE7 */
0, 0, 0, 0,
- SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
- SrcNone | ByteOp | ImplicitOps, SrcNone | ImplicitOps,
+ ByteOp | SrcImmUByte, SrcImmUByte,
+ ByteOp | SrcImmUByte, SrcImmUByte,
/* 0xE8 - 0xEF */
SrcImm | Stack, SrcImm | ImplicitOps,
SrcImm | Src2Imm16, SrcImmByte | ImplicitOps,
@@ -1777,12 +1777,12 @@ special_insn:
break;
case 0xe4: /* inb */
case 0xe5: /* in */
- port = insn_fetch(u8, 1, c->eip);
+ port = c->src.val;
io_dir_in = 1;
goto do_io;
case 0xe6: /* outb */
case 0xe7: /* out */
- port = insn_fetch(u8, 1, c->eip);
+ port = c->src.val;
io_dir_in = 0;
goto do_io;
case 0xe8: /* call (near) */ {
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 29/46] KVM: x86 emulator: Decode soft interrupt instructions
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (27 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 28/46] KVM: x86 emulator: Completely decode in/out at decoding stage Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 30/46] KVM: x86 emulator: Add new mode of instruction emulation: skip Avi Kivity
` (16 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Do not emulate them yet.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86_emulate.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86_emulate.c b/arch/x86/kvm/x86_emulate.c
index c2f55ca..d2664fc 100644
--- a/arch/x86/kvm/x86_emulate.c
+++ b/arch/x86/kvm/x86_emulate.c
@@ -181,7 +181,8 @@ static u32 opcode_table[256] = {
0, ImplicitOps | Stack, 0, 0,
ByteOp | DstMem | SrcImm | ModRM | Mov, DstMem | SrcImm | ModRM | Mov,
/* 0xC8 - 0xCF */
- 0, 0, 0, ImplicitOps | Stack, 0, 0, 0, 0,
+ 0, 0, 0, ImplicitOps | Stack,
+ ImplicitOps, SrcImmByte, ImplicitOps, ImplicitOps,
/* 0xD0 - 0xD7 */
ByteOp | DstMem | SrcImplicit | ModRM, DstMem | SrcImplicit | ModRM,
ByteOp | DstMem | SrcImplicit | ModRM, DstMem | SrcImplicit | ModRM,
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 30/46] KVM: x86 emulator: Add new mode of instruction emulation: skip
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (28 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 29/46] KVM: x86 emulator: Decode soft interrupt instructions Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 31/46] KVM: SVM: Skip instruction on a task switch only when appropriate Avi Kivity
` (15 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
In the new mode the instruction is decoded, but not executed. EIP
is moved to point past the instruction.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 5 +++++
2 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0e3a7c6..cb306cf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -562,6 +562,7 @@ enum emulation_result {
#define EMULTYPE_NO_DECODE (1 << 0)
#define EMULTYPE_TRAP_UD (1 << 1)
+#define EMULTYPE_SKIP (1 << 2)
int emulate_instruction(struct kvm_vcpu *vcpu, struct kvm_run *run,
unsigned long cr2, u16 error_code, int emulation_type);
void kvm_report_emulation_failure(struct kvm_vcpu *cvpu, const char *context);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9b89d9b..0c45df9 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2409,6 +2409,11 @@ int emulate_instruction(struct kvm_vcpu *vcpu,
}
}
+ if (emulation_type & EMULTYPE_SKIP) {
+ kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.decode.eip);
+ return EMULATE_DONE;
+ }
+
r = x86_emulate_insn(&vcpu->arch.emulate_ctxt, &emulate_ops);
if (vcpu->arch.pio.string)
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 31/46] KVM: SVM: Skip instruction on a task switch only when appropriate
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (29 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 30/46] KVM: x86 emulator: Add new mode of instruction emulation: skip Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 32/46] KVM: Replace kvmclock open-coded get_cpu_var() with the real thing Avi Kivity
` (14 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
If a task switch was initiated because of a task gate in the IDT, and
the IDT was accessed because of an external event, the instruction
should not be skipped.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/svm.c | 11 +++++++++--
1 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index bba67b7..8fc6eea 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1828,6 +1828,7 @@ static int task_switch_interception(struct vcpu_svm *svm,
int reason;
int int_type = svm->vmcb->control.exit_int_info &
SVM_EXITINTINFO_TYPE_MASK;
+ int int_vec = svm->vmcb->control.exit_int_info & SVM_EVTINJ_VEC_MASK;
tss_selector = (u16)svm->vmcb->control.exit_info_1;
@@ -1843,8 +1844,14 @@ static int task_switch_interception(struct vcpu_svm *svm,
reason = TASK_SWITCH_CALL;
- if (reason != TASK_SWITCH_GATE || int_type == SVM_EXITINTINFO_TYPE_SOFT)
- skip_emulated_instruction(&svm->vcpu);
+ if (reason != TASK_SWITCH_GATE ||
+ int_type == SVM_EXITINTINFO_TYPE_SOFT ||
+ (int_type == SVM_EXITINTINFO_TYPE_EXEPT &&
+ (int_vec == OF_VECTOR || int_vec == BP_VECTOR))) {
+ if (emulate_instruction(&svm->vcpu, kvm_run, 0, 0,
+ EMULTYPE_SKIP) != EMULATE_DONE)
+ return 0;
+ }
return kvm_task_switch(&svm->vcpu, tss_selector, reason);
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 32/46] KVM: Replace kvmclock open-coded get_cpu_var() with the real thing
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (30 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 31/46] KVM: SVM: Skip instruction on a task switch only when appropriate Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 33/46] KVM: ia64: Don't hold slots_lock in guest mode Avi Kivity
` (13 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
Suggested by Ingo Molnar.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 11 ++++++-----
1 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0c45df9..a7ea26e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -627,16 +627,17 @@ static void kvm_write_guest_time(struct kvm_vcpu *v)
unsigned long flags;
struct kvm_vcpu_arch *vcpu = &v->arch;
void *shared_kaddr;
+ unsigned long this_tsc_khz;
if ((!vcpu->time_page))
return;
- preempt_disable();
- if (unlikely(vcpu->hv_clock_tsc_khz != __get_cpu_var(cpu_tsc_khz))) {
- kvm_set_time_scale(__get_cpu_var(cpu_tsc_khz), &vcpu->hv_clock);
- vcpu->hv_clock_tsc_khz = __get_cpu_var(cpu_tsc_khz);
+ this_tsc_khz = get_cpu_var(cpu_tsc_khz);
+ if (unlikely(vcpu->hv_clock_tsc_khz != this_tsc_khz)) {
+ kvm_set_time_scale(this_tsc_khz, &vcpu->hv_clock);
+ vcpu->hv_clock_tsc_khz = this_tsc_khz;
}
- preempt_enable();
+ put_cpu_var(cpu_tsc_khz);
/* Keep irq disabled to prevent changes to the clock */
local_irq_save(flags);
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 33/46] KVM: ia64: Don't hold slots_lock in guest mode
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (31 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 32/46] KVM: Replace kvmclock open-coded get_cpu_var() with the real thing Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 34/46] KVM: x86: check for cr3 validity in ioctl_set_sregs Avi Kivity
` (12 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jes Sorensen <jes@sgi.com>
Reorder locking to avoid holding the slots_lock when entering
the guest.
Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 64 +++++++++++++++++++++++----------------------
1 files changed, 33 insertions(+), 31 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 3bf0a34..f127fb7 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -632,34 +632,22 @@ static int kvm_vcpu_pre_transition(struct kvm_vcpu *vcpu)
vti_set_rr6(vcpu->arch.vmm_rr);
return kvm_insert_vmm_mapping(vcpu);
}
+
static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
{
kvm_purge_vmm_mapping(vcpu);
vti_set_rr6(vcpu->arch.host_rr6);
}
-static int vti_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
union context *host_ctx, *guest_ctx;
int r;
- /*Get host and guest context with guest address space.*/
- host_ctx = kvm_get_host_context(vcpu);
- guest_ctx = kvm_get_guest_context(vcpu);
-
- r = kvm_vcpu_pre_transition(vcpu);
- if (r < 0)
- goto out;
- kvm_vmm_info->tramp_entry(host_ctx, guest_ctx);
- kvm_vcpu_post_transition(vcpu);
- r = 0;
-out:
- return r;
-}
-
-static int __vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
-{
- int r;
+ /*
+ * down_read() may sleep and return with interrupts enabled
+ */
+ down_read(&vcpu->kvm->slots_lock);
again:
if (signal_pending(current)) {
@@ -668,23 +656,28 @@ again:
goto out;
}
- /*
- * down_read() may sleep and return with interrupts enabled
- */
- down_read(&vcpu->kvm->slots_lock);
-
preempt_disable();
local_irq_disable();
+ /*Get host and guest context with guest address space.*/
+ host_ctx = kvm_get_host_context(vcpu);
+ guest_ctx = kvm_get_guest_context(vcpu);
+
vcpu->guest_mode = 1;
+
+ r = kvm_vcpu_pre_transition(vcpu);
+ if (r < 0)
+ goto vcpu_run_fail;
+
+ up_read(&vcpu->kvm->slots_lock);
kvm_guest_enter();
- r = vti_vcpu_run(vcpu, kvm_run);
- if (r < 0) {
- local_irq_enable();
- preempt_enable();
- kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
- goto out;
- }
+
+ /*
+ * Transition to the guest
+ */
+ kvm_vmm_info->tramp_entry(host_ctx, guest_ctx);
+
+ kvm_vcpu_post_transition(vcpu);
vcpu->arch.launched = 1;
vcpu->guest_mode = 0;
@@ -698,9 +691,10 @@ again:
*/
barrier();
kvm_guest_exit();
- up_read(&vcpu->kvm->slots_lock);
preempt_enable();
+ down_read(&vcpu->kvm->slots_lock);
+
r = kvm_handle_exit(kvm_run, vcpu);
if (r > 0) {
@@ -709,12 +703,20 @@ again:
}
out:
+ up_read(&vcpu->kvm->slots_lock);
if (r > 0) {
kvm_resched(vcpu);
+ down_read(&vcpu->kvm->slots_lock);
goto again;
}
return r;
+
+vcpu_run_fail:
+ local_irq_enable();
+ preempt_enable();
+ kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+ goto out;
}
static void kvm_set_mmio_data(struct kvm_vcpu *vcpu)
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 34/46] KVM: x86: check for cr3 validity in ioctl_set_sregs
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (32 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 33/46] KVM: ia64: Don't hold slots_lock in guest mode Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 35/46] KVM: ia64: Flush all TLBs once guest's memory mapping changes Avi Kivity
` (11 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Marcelo Tosatti <mtosatti@redhat.com>
Matt T. Yourst notes that kvm_arch_vcpu_ioctl_set_sregs lacks validity
checking for the new cr3 value:
"Userspace callers of KVM_SET_SREGS can pass a bogus value of cr3 to
the kernel. This will trigger a NULL pointer access in gfn_to_rmap()
when userspace next tries to call KVM_RUN on the affected VCPU and kvm
attempts to activate the new non-existent page table root.
This happens since kvm only validates that cr3 points to a valid guest
physical memory page when code *inside* the guest sets cr3. However, kvm
currently trusts the userspace caller (e.g. QEMU) on the host machine to
always supply a valid page table root, rather than properly validating
it along with the rest of the reloaded guest state."
http://sourceforge.net/tracker/?func=detail&atid=893831&aid=2687641&group_id=180599
Check for a valid cr3 address in kvm_arch_vcpu_ioctl_set_sregs, and
trigger a triple fault in case of failure.
Cc: stable@kernel.org
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a7ea26e..6fc53bb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3989,7 +3989,13 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
vcpu->arch.cr2 = sregs->cr2;
mmu_reset_needed |= vcpu->arch.cr3 != sregs->cr3;
- vcpu->arch.cr3 = sregs->cr3;
+
+ down_read(&vcpu->kvm->slots_lock);
+ if (gfn_to_memslot(vcpu->kvm, sregs->cr3 >> PAGE_SHIFT))
+ vcpu->arch.cr3 = sregs->cr3;
+ else
+ set_bit(KVM_REQ_TRIPLE_FAULT, &vcpu->requests);
+ up_read(&vcpu->kvm->slots_lock);
kvm_set_cr8(vcpu, sregs->cr8);
--
1.6.0.6
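The check added above follows a simple pattern: before accepting a userspace-supplied cr3, translate it to a guest frame number and verify that some memslot backs that frame. The sketch below models this with a hypothetical miniature memslot table (`slots`, `gfn_to_memslot()`, and `cr3_is_valid()` are illustrative stand-ins, not the kernel's actual definitions):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical miniature memslot table standing in for kvm->memslots. */
struct memslot {
	uint64_t base_gfn;
	uint64_t npages;
};

static struct memslot slots[] = {
	{ .base_gfn = 0x000, .npages = 0x100 }, /* guest RAM: gfns 0x000-0x0ff */
	{ .base_gfn = 0x200, .npages = 0x080 }, /* second bank: 0x200-0x27f  */
};

/* Simplified analogue of gfn_to_memslot(): NULL when the gfn is unmapped. */
static struct memslot *gfn_to_memslot(uint64_t gfn)
{
	size_t i;

	for (i = 0; i < sizeof(slots) / sizeof(slots[0]); i++)
		if (gfn >= slots[i].base_gfn &&
		    gfn < slots[i].base_gfn + slots[i].npages)
			return &slots[i];
	return NULL;
}

/* The check the patch adds: accept cr3 only if its page frame is backed. */
static int cr3_is_valid(uint64_t cr3)
{
	return gfn_to_memslot(cr3 >> PAGE_SHIFT) != NULL;
}
```

In the real patch a failed lookup does not reject the ioctl; it queues KVM_REQ_TRIPLE_FAULT so the vcpu resets on its next entry instead of dereferencing a nonexistent page-table root.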
* [PATCH 35/46] KVM: ia64: Flush all TLBs once guest's memory mapping changes.
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (33 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 34/46] KVM: x86: check for cr3 validity in ioctl_set_sregs Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 36/46] KVM: ia64: remove empty function vti_vcpu_load() Avi Kivity
` (10 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Xiantao Zhang <xiantao.zhang@intel.com>
Flush all vcpus' TLB entries once the guest's memory mapping changes.
Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index f127fb7..9addca6 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1631,6 +1631,7 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
void kvm_arch_flush_shadow(struct kvm *kvm)
{
+ kvm_flush_remote_tlbs(kvm);
}
long kvm_arch_dev_ioctl(struct file *filp,
--
1.6.0.6
* [PATCH 36/46] KVM: ia64: remove empty function vti_vcpu_load()
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (34 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 35/46] KVM: ia64: Flush all TLBs once guest's memory mapping changes Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 37/46] KVM: ia64: restore irq state before calling kvm_vcpu_init Avi Kivity
` (9 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jes Sorensen <jes@sgi.com>
vti_vcpu_load() doesn't do anything, so let's get rid of it.
Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 5 -----
1 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 9addca6..7263171 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1100,10 +1100,6 @@ static void kvm_free_vmm_area(void)
}
}
-static void vti_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
-{
-}
-
static int vti_init_vpd(struct kvm_vcpu *vcpu)
{
int i;
@@ -1348,7 +1344,6 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
vcpu->kvm = kvm;
cpu = get_cpu();
- vti_vcpu_load(vcpu, cpu);
r = vti_vcpu_setup(vcpu, id);
put_cpu();
--
1.6.0.6
* [PATCH 37/46] KVM: ia64: restore irq state before calling kvm_vcpu_init
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (35 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 36/46] KVM: ia64: remove empty function vti_vcpu_load() Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 38/46] KVM: ia64: preserve int status through call to kvm_insert_vmm_mapping Avi Kivity
` (8 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jes Sorensen <jes@sgi.com>
Make sure to restore the psr right after calling
kvm_insert_vmm_mapping() (which calls ia64_itr_entry() with local
interrupts disabled), since kvm_vcpu_init() may sleep.
Avoids a warning from the lock debugging code.
Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 3 +--
1 files changed, 1 insertions(+), 2 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 7263171..5b868db 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -1290,6 +1290,7 @@ static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id)
local_irq_save(psr);
r = kvm_insert_vmm_mapping(vcpu);
+ local_irq_restore(psr);
if (r)
goto fail;
r = kvm_vcpu_init(vcpu, vcpu->kvm, id);
@@ -1307,13 +1308,11 @@ static int vti_vcpu_setup(struct kvm_vcpu *vcpu, int id)
goto uninit;
kvm_purge_vmm_mapping(vcpu);
- local_irq_restore(psr);
return 0;
uninit:
kvm_vcpu_uninit(vcpu);
fail:
- local_irq_restore(psr);
return r;
}
--
1.6.0.6
* [PATCH 38/46] KVM: ia64: preserve int status through call to kvm_insert_vmm_mapping
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (36 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 37/46] KVM: ia64: restore irq state before calling kvm_vcpu_init Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 39/46] KVM: ia64: ia64 vcpu_reset() do not call kmalloc() with irqs disabled Avi Kivity
` (7 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jes Sorensen <jes@sgi.com>
Preserve interrupt status around the call to kvm_insert_vmm_mapping()
in kvm_vcpu_pre_transition().
Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 7 ++++++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index 5b868db..cf5a193 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -619,6 +619,8 @@ static void kvm_purge_vmm_mapping(struct kvm_vcpu *vcpu)
static int kvm_vcpu_pre_transition(struct kvm_vcpu *vcpu)
{
+ unsigned long psr;
+ int r;
int cpu = smp_processor_id();
if (vcpu->arch.last_run_cpu != cpu ||
@@ -630,7 +632,10 @@ static int kvm_vcpu_pre_transition(struct kvm_vcpu *vcpu)
vcpu->arch.host_rr6 = ia64_get_rr(RR6);
vti_set_rr6(vcpu->arch.vmm_rr);
- return kvm_insert_vmm_mapping(vcpu);
+ local_irq_save(psr);
+ r = kvm_insert_vmm_mapping(vcpu);
+ local_irq_restore(psr);
+ return r;
}
static void kvm_vcpu_post_transition(struct kvm_vcpu *vcpu)
--
1.6.0.6
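Patches 37 through 39 all apply the same discipline: keep interrupts disabled only across the call that actually requires them off, and restore the psr before anything that may sleep. A minimal sketch of the fixed ordering, with `irqs_enabled`, `insert_mapping()`, `may_sleep_init()`, and `setup()` as hypothetical stand-ins for the interrupt flag and the kernel functions involved:

```c
#include <assert.h>

/* Hypothetical stand-in for the CPU interrupt flag (psr.i on ia64). */
static int irqs_enabled = 1;

static void local_irq_save(unsigned long *psr)
{
	*psr = (unsigned long)irqs_enabled;
	irqs_enabled = 0;
}

static void local_irq_restore(unsigned long psr)
{
	irqs_enabled = (int)psr;
}

/* Must run with interrupts off (it programs TLB entries, like ia64_itr_entry()). */
static int insert_mapping(void)
{
	assert(!irqs_enabled);
	return 0;
}

/* May sleep, so it must never be called with interrupts disabled. */
static int may_sleep_init(void)
{
	assert(irqs_enabled);
	return 0;
}

/* The fixed ordering: restore psr immediately after the atomic section. */
static int setup(void)
{
	unsigned long psr;
	int r;

	local_irq_save(&psr);
	r = insert_mapping();
	local_irq_restore(psr);	/* restore *before* the sleeping call */
	if (r)
		return r;
	return may_sleep_init();
}
```

The pre-patch code restored psr only at the end of the function, so the sleeping call ran with interrupts disabled, which is what triggered the lock-debugging warning mentioned in patch 37.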
* [PATCH 39/46] KVM: ia64: ia64 vcpu_reset() do not call kmalloc() with irqs disabled
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (37 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 38/46] KVM: ia64: preserve int status through call to kvm_insert_vmm_mapping Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 40/46] KVM: MMU: Fix auditing code Avi Kivity
` (6 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jes Sorensen <jes@sgi.com>
Restore local irq enabled state before calling kvm_arch_vcpu_init(),
which calls kmalloc(GFP_KERNEL).
Signed-off-by: Jes Sorensen <jes@sgi.com>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/ia64/kvm/kvm-ia64.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/ia64/kvm/kvm-ia64.c b/arch/ia64/kvm/kvm-ia64.c
index cf5a193..be4413e 100644
--- a/arch/ia64/kvm/kvm-ia64.c
+++ b/arch/ia64/kvm/kvm-ia64.c
@@ -2001,6 +2001,7 @@ static int vcpu_reset(struct kvm_vcpu *vcpu)
long psr;
local_irq_save(psr);
r = kvm_insert_vmm_mapping(vcpu);
+ local_irq_restore(psr);
if (r)
goto fail;
@@ -2013,7 +2014,6 @@ static int vcpu_reset(struct kvm_vcpu *vcpu)
kvm_purge_vmm_mapping(vcpu);
r = 0;
fail:
- local_irq_restore(psr);
return r;
}
--
1.6.0.6
* [PATCH 40/46] KVM: MMU: Fix auditing code
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (38 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 39/46] KVM: ia64: ia64 vcpu_reset() do not call kmalloc() with irqs disabled Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 41/46] KVM: Make kvm_cpu_(has|get)_interrupt() work for userspace irqchip too Avi Kivity
` (5 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Jan Kiszka <jan.kiszka@web.de>
Fix build breakage of hpa lookup in audit_mappings_page. Moreover, make
this function robust against shadow_notrap_nonpresent_pte entries.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5b79afa..a55373c 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3030,11 +3030,13 @@ static void audit_mappings_page(struct kvm_vcpu *vcpu, u64 page_pte,
" in nonleaf level: levels %d gva %lx"
" level %d pte %llx\n", audit_msg,
vcpu->arch.mmu.root_level, va, level, ent);
-
- audit_mappings_page(vcpu, ent, va, level - 1);
+ else
+ audit_mappings_page(vcpu, ent, va, level - 1);
} else {
gpa_t gpa = vcpu->arch.mmu.gva_to_gpa(vcpu, va);
- hpa_t hpa = (hpa_t)gpa_to_pfn(vcpu, gpa) << PAGE_SHIFT;
+ gfn_t gfn = gpa >> PAGE_SHIFT;
+ pfn_t pfn = gfn_to_pfn(vcpu->kvm, gfn);
+ hpa_t hpa = (hpa_t)pfn << PAGE_SHIFT;
if (is_shadow_present_pte(ent)
&& (ent & PT64_BASE_ADDR_MASK) != hpa)
--
1.6.0.6
* [PATCH 41/46] KVM: Make kvm_cpu_(has|get)_interrupt() work for userspace irqchip too
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (39 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 40/46] KVM: MMU: Fix auditing code Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 42/46] KVM: VMX: Consolidate userspace and kernel interrupt injection for VMX Avi Kivity
` (4 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
At the vector level, kernel and userspace irqchip are fairly similar.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/irq.c | 7 +++++++
arch/x86/kvm/svm.c | 11 +++++++----
arch/x86/kvm/vmx.c | 18 +++++++++---------
arch/x86/kvm/x86.c | 4 ++--
4 files changed, 25 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/irq.c b/arch/x86/kvm/irq.c
index cf17ed5..11c2757 100644
--- a/arch/x86/kvm/irq.c
+++ b/arch/x86/kvm/irq.c
@@ -24,6 +24,7 @@
#include "irq.h"
#include "i8254.h"
+#include "x86.h"
/*
* check if there are pending timer events
@@ -48,6 +49,9 @@ int kvm_cpu_has_interrupt(struct kvm_vcpu *v)
{
struct kvm_pic *s;
+ if (!irqchip_in_kernel(v->kvm))
+ return v->arch.irq_summary;
+
if (kvm_apic_has_interrupt(v) == -1) { /* LAPIC */
if (kvm_apic_accept_pic_intr(v)) {
s = pic_irqchip(v->kvm); /* PIC */
@@ -67,6 +71,9 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v)
struct kvm_pic *s;
int vector;
+ if (!irqchip_in_kernel(v->kvm))
+ return kvm_pop_irq(v);
+
vector = kvm_get_apic_interrupt(v); /* APIC */
if (vector == -1) {
if (kvm_apic_accept_pic_intr(v)) {
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 8fc6eea..6eef6d2 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2091,8 +2091,9 @@ static int interrupt_window_interception(struct vcpu_svm *svm,
* If the user space waits to inject interrupts, exit as soon as
* possible
*/
- if (kvm_run->request_interrupt_window &&
- !svm->vcpu.arch.irq_summary) {
+ if (!irqchip_in_kernel(svm->vcpu.kvm) &&
+ kvm_run->request_interrupt_window &&
+ !kvm_cpu_has_interrupt(&svm->vcpu)) {
++svm->vcpu.stat.irq_window_exits;
kvm_run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
return 0;
@@ -2373,7 +2374,8 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu,
(svm->vmcb->save.rflags & X86_EFLAGS_IF) &&
(svm->vcpu.arch.hflags & HF_GIF_MASK));
- if (svm->vcpu.arch.interrupt_window_open && svm->vcpu.arch.irq_summary)
+ if (svm->vcpu.arch.interrupt_window_open &&
+ kvm_cpu_has_interrupt(&svm->vcpu))
/*
* If interrupts enabled, and not blocked by sti or mov ss. Good.
*/
@@ -2383,7 +2385,8 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu,
* Interrupts blocked. Wait for unblock.
*/
if (!svm->vcpu.arch.interrupt_window_open &&
- (svm->vcpu.arch.irq_summary || kvm_run->request_interrupt_window))
+ (kvm_cpu_has_interrupt(&svm->vcpu) ||
+ kvm_run->request_interrupt_window))
svm_set_vintr(svm);
else
svm_clear_vintr(svm);
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index c6997c0..b3292c1 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2535,21 +2535,20 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu,
vmx_inject_nmi(vcpu);
if (vcpu->arch.nmi_pending)
enable_nmi_window(vcpu);
- else if (vcpu->arch.irq_summary
- || kvm_run->request_interrupt_window)
+ else if (kvm_cpu_has_interrupt(vcpu) ||
+ kvm_run->request_interrupt_window)
enable_irq_window(vcpu);
return;
}
if (vcpu->arch.interrupt_window_open) {
- if (vcpu->arch.irq_summary && !vcpu->arch.interrupt.pending)
- kvm_queue_interrupt(vcpu, kvm_pop_irq(vcpu));
+ if (kvm_cpu_has_interrupt(vcpu) && !vcpu->arch.interrupt.pending)
+ kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
if (vcpu->arch.interrupt.pending)
vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
- }
- if (!vcpu->arch.interrupt_window_open &&
- (vcpu->arch.irq_summary || kvm_run->request_interrupt_window))
+ } else if(kvm_cpu_has_interrupt(vcpu) ||
+ kvm_run->request_interrupt_window)
enable_irq_window(vcpu);
}
@@ -2976,8 +2975,9 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu,
* If the user space waits to inject interrupts, exit as soon as
* possible
*/
- if (kvm_run->request_interrupt_window &&
- !vcpu->arch.irq_summary) {
+ if (!irqchip_in_kernel(vcpu->kvm) &&
+ kvm_run->request_interrupt_window &&
+ !kvm_cpu_has_interrupt(vcpu)) {
kvm_run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
return 0;
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6fc53bb..0c9c13c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3065,7 +3065,7 @@ EXPORT_SYMBOL_GPL(kvm_emulate_cpuid);
static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu,
struct kvm_run *kvm_run)
{
- return (!vcpu->arch.irq_summary &&
+ return (!irqchip_in_kernel(vcpu->kvm) && !kvm_cpu_has_interrupt(vcpu) &&
kvm_run->request_interrupt_window &&
vcpu->arch.interrupt_window_open &&
(kvm_x86_ops->get_rflags(vcpu) & X86_EFLAGS_IF));
@@ -3082,7 +3082,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu,
else
kvm_run->ready_for_interrupt_injection =
(vcpu->arch.interrupt_window_open &&
- vcpu->arch.irq_summary == 0);
+ !kvm_cpu_has_interrupt(vcpu));
}
static void vapic_enter(struct kvm_vcpu *vcpu)
--
1.6.0.6
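With a userspace irqchip there is no in-kernel PIC or LAPIC to consult, so pending vectors live in simple per-vcpu state and kvm_cpu_has_interrupt()/kvm_cpu_get_interrupt() fall back to it, as the patch's early-return branches show. A toy model of that per-vcpu queue (all names here are hypothetical; the real kernel tracks this in vcpu->arch):

```c
#define NR_VECTORS 256

/* Hypothetical per-vcpu pending-interrupt state for a userspace irqchip. */
struct toy_vcpu {
	unsigned char pending[NR_VECTORS];	/* one flag per vector */
	int irq_summary;			/* count of pending vectors */
};

static void push_irq(struct toy_vcpu *v, int vec)
{
	if (!v->pending[vec]) {
		v->pending[vec] = 1;
		v->irq_summary++;
	}
}

/* Highest-numbered pending vector wins, mirroring x86 APIC priority. */
static int pop_irq(struct toy_vcpu *v)
{
	int vec;

	for (vec = NR_VECTORS - 1; vec >= 0; vec--)
		if (v->pending[vec]) {
			v->pending[vec] = 0;
			v->irq_summary--;
			return vec;
		}
	return -1;
}

static int cpu_has_interrupt(struct toy_vcpu *v)
{
	return v->irq_summary != 0;
}

/* Exercise the model: queue two vectors, drain them in priority order. */
static int demo(void)
{
	struct toy_vcpu v = {0};

	push_irq(&v, 32);
	push_irq(&v, 48);
	if (!cpu_has_interrupt(&v))
		return -1;
	if (pop_irq(&v) != 48 || pop_irq(&v) != 32)
		return -1;
	return cpu_has_interrupt(&v) ? -1 : 0;
}
```

This is why the patch can replace direct `irq_summary` tests with kvm_cpu_has_interrupt(): with a kernel irqchip the helper asks the APIC/PIC, and without one it reads exactly this kind of summary state.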
* [PATCH 42/46] KVM: VMX: Consolidate userspace and kernel interrupt injection for VMX
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (40 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 41/46] KVM: Make kvm_cpu_(has|get)_interrupt() work for userspace irqchip too Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 43/46] KVM: VMX: Cleanup vmx_intr_assist() Avi Kivity
` (3 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Use the same callback to inject irq/nmi events no matter what irqchip is
in use. Only for VMX for now.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/svm.c | 2 +-
arch/x86/kvm/vmx.c | 71 +++++++++------------------------------
arch/x86/kvm/x86.c | 2 +-
4 files changed, 19 insertions(+), 58 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index cb306cf..5edae35 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -517,7 +517,7 @@ struct kvm_x86_ops {
void (*queue_exception)(struct kvm_vcpu *vcpu, unsigned nr,
bool has_error_code, u32 error_code);
bool (*exception_injected)(struct kvm_vcpu *vcpu);
- void (*inject_pending_irq)(struct kvm_vcpu *vcpu);
+ void (*inject_pending_irq)(struct kvm_vcpu *vcpu, struct kvm_run *run);
void (*inject_pending_vectors)(struct kvm_vcpu *vcpu,
struct kvm_run *run);
int (*interrupt_allowed)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 6eef6d2..f2933ab 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2298,7 +2298,7 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
(svm->vcpu.arch.hflags & HF_GIF_MASK);
}
-static void svm_intr_assist(struct kvm_vcpu *vcpu)
+static void svm_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
struct vcpu_svm *svm = to_svm(vcpu);
struct vmcb *vmcb = svm->vmcb;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index b3292c1..06252f7 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2510,48 +2510,6 @@ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
return vcpu->arch.interrupt_window_open;
}
-static void do_interrupt_requests(struct kvm_vcpu *vcpu,
- struct kvm_run *kvm_run)
-{
- vmx_update_window_states(vcpu);
-
- if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
- vmcs_clear_bits(GUEST_INTERRUPTIBILITY_INFO,
- GUEST_INTR_STATE_STI |
- GUEST_INTR_STATE_MOV_SS);
-
- if (vcpu->arch.nmi_pending && !vcpu->arch.nmi_injected) {
- if (vcpu->arch.interrupt.pending) {
- enable_nmi_window(vcpu);
- } else if (vcpu->arch.nmi_window_open) {
- vcpu->arch.nmi_pending = false;
- vcpu->arch.nmi_injected = true;
- } else {
- enable_nmi_window(vcpu);
- return;
- }
- }
- if (vcpu->arch.nmi_injected) {
- vmx_inject_nmi(vcpu);
- if (vcpu->arch.nmi_pending)
- enable_nmi_window(vcpu);
- else if (kvm_cpu_has_interrupt(vcpu) ||
- kvm_run->request_interrupt_window)
- enable_irq_window(vcpu);
- return;
- }
-
- if (vcpu->arch.interrupt_window_open) {
- if (kvm_cpu_has_interrupt(vcpu) && !vcpu->arch.interrupt.pending)
- kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
-
- if (vcpu->arch.interrupt.pending)
- vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
- } else if(kvm_cpu_has_interrupt(vcpu) ||
- kvm_run->request_interrupt_window)
- enable_irq_window(vcpu);
-}
-
static int vmx_set_tss_addr(struct kvm *kvm, unsigned int addr)
{
int ret;
@@ -3351,8 +3309,11 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
}
}
-static void vmx_intr_assist(struct kvm_vcpu *vcpu)
+static void vmx_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
+ bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
+ kvm_run->request_interrupt_window;
+
update_tpr_threshold(vcpu);
vmx_update_window_states(vcpu);
@@ -3373,25 +3334,25 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu)
return;
}
}
+
if (vcpu->arch.nmi_injected) {
vmx_inject_nmi(vcpu);
- if (vcpu->arch.nmi_pending)
- enable_nmi_window(vcpu);
- else if (kvm_cpu_has_interrupt(vcpu))
- enable_irq_window(vcpu);
- return;
+ goto out;
}
+
if (!vcpu->arch.interrupt.pending && kvm_cpu_has_interrupt(vcpu)) {
if (vcpu->arch.interrupt_window_open)
kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
- else
- enable_irq_window(vcpu);
}
- if (vcpu->arch.interrupt.pending) {
+
+ if (vcpu->arch.interrupt.pending)
vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
- if (kvm_cpu_has_interrupt(vcpu))
- enable_irq_window(vcpu);
- }
+
+out:
+ if (vcpu->arch.nmi_pending)
+ enable_nmi_window(vcpu);
+ else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+ enable_irq_window(vcpu);
}
/*
@@ -3733,7 +3694,7 @@ static struct kvm_x86_ops vmx_x86_ops = {
.queue_exception = vmx_queue_exception,
.exception_injected = vmx_exception_injected,
.inject_pending_irq = vmx_intr_assist,
- .inject_pending_vectors = do_interrupt_requests,
+ .inject_pending_vectors = vmx_intr_assist,
.interrupt_allowed = vmx_interrupt_allowed,
.set_tss_addr = vmx_set_tss_addr,
.get_tdp_level = get_ept_level,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 0c9c13c..7a572ec 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3169,7 +3169,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
if (vcpu->arch.exception.pending)
__queue_exception(vcpu);
else if (irqchip_in_kernel(vcpu->kvm))
- kvm_x86_ops->inject_pending_irq(vcpu);
+ kvm_x86_ops->inject_pending_irq(vcpu, kvm_run);
else
kvm_x86_ops->inject_pending_vectors(vcpu, kvm_run);
--
1.6.0.6
* [PATCH 43/46] KVM: VMX: Cleanup vmx_intr_assist()
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (41 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 42/46] KVM: VMX: Consolidate userspace and kernel interrupt injection for VMX Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 44/46] KVM: Use kvm_arch_interrupt_allowed() instead of checking interrupt_window_open directly Avi Kivity
` (2 subsequent siblings)
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/vmx.c | 55 ++++++++++++++++++++++++++++-----------------------
1 files changed, 30 insertions(+), 25 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 06252f7..9eb518f 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3309,6 +3309,34 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
}
}
+static void vmx_intr_inject(struct kvm_vcpu *vcpu)
+{
+ /* try to reinject previous events if any */
+ if (vcpu->arch.nmi_injected) {
+ vmx_inject_nmi(vcpu);
+ return;
+ }
+
+ if (vcpu->arch.interrupt.pending) {
+ vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
+ return;
+ }
+
+ /* try to inject new event if pending */
+ if (vcpu->arch.nmi_pending) {
+ if (vcpu->arch.nmi_window_open) {
+ vcpu->arch.nmi_pending = false;
+ vcpu->arch.nmi_injected = true;
+ vmx_inject_nmi(vcpu);
+ }
+ } else if (kvm_cpu_has_interrupt(vcpu)) {
+ if (vcpu->arch.interrupt_window_open) {
+ kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
+ vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
+ }
+ }
+}
+
static void vmx_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
@@ -3323,32 +3351,9 @@ static void vmx_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
GUEST_INTR_STATE_STI |
GUEST_INTR_STATE_MOV_SS);
- if (vcpu->arch.nmi_pending && !vcpu->arch.nmi_injected) {
- if (vcpu->arch.interrupt.pending) {
- enable_nmi_window(vcpu);
- } else if (vcpu->arch.nmi_window_open) {
- vcpu->arch.nmi_pending = false;
- vcpu->arch.nmi_injected = true;
- } else {
- enable_nmi_window(vcpu);
- return;
- }
- }
-
- if (vcpu->arch.nmi_injected) {
- vmx_inject_nmi(vcpu);
- goto out;
- }
-
- if (!vcpu->arch.interrupt.pending && kvm_cpu_has_interrupt(vcpu)) {
- if (vcpu->arch.interrupt_window_open)
- kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
- }
-
- if (vcpu->arch.interrupt.pending)
- vmx_inject_irq(vcpu, vcpu->arch.interrupt.nr);
+ vmx_intr_inject(vcpu);
-out:
+ /* enable NMI/IRQ window open exits if needed */
if (vcpu->arch.nmi_pending)
enable_nmi_window(vcpu);
else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
--
1.6.0.6
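The new vmx_intr_inject() encodes a fixed priority: first retry a half-delivered NMI, then a half-delivered interrupt, and only then consider fresh events, each gated by its window being open. That decision order can be sketched as a pure function (the enum and struct below are illustrative, not kernel types):

```c
#include <stdbool.h>

enum action {
	INJECT_NONE,
	REINJECT_NMI,
	REINJECT_IRQ,
	INJECT_NMI,
	INJECT_IRQ,
};

/* Hypothetical mirror of the vcpu event state vmx_intr_inject() consults. */
struct event_state {
	bool nmi_injected;	/* NMI delivery was interrupted; retry it    */
	bool irq_pending;	/* interrupt already queued; retry it        */
	bool nmi_pending;	/* fresh NMI waiting                         */
	bool nmi_window_open;
	bool has_interrupt;	/* fresh external interrupt waiting          */
	bool irq_window_open;
};

/* Same priority order as vmx_intr_inject(): reinjection before new events. */
static enum action pick_event(const struct event_state *s)
{
	if (s->nmi_injected)
		return REINJECT_NMI;
	if (s->irq_pending)
		return REINJECT_IRQ;
	if (s->nmi_pending)
		return s->nmi_window_open ? INJECT_NMI : INJECT_NONE;
	if (s->has_interrupt && s->irq_window_open)
		return INJECT_IRQ;
	return INJECT_NONE;
}
```

When pick_event() returns INJECT_NONE with an event still waiting, the caller's epilogue (the `enable NMI/IRQ window open exits` block in vmx_intr_assist()) arranges a VM exit as soon as the blocked window opens again.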
* [PATCH 44/46] KVM: Use kvm_arch_interrupt_allowed() instead of checking interrupt_window_open directly
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (42 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 43/46] KVM: VMX: Cleanup vmx_intr_assist() Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 45/46] KVM: SVM: Coalesce userspace/kernel irqchip interrupt injection logic Avi Kivity
2009-05-20 11:18 ` [PATCH 46/46] KVM: Remove exception_injected() callback Avi Kivity
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
kvm_arch_interrupt_allowed() also checks IF, so drop the open-coded check.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/x86.c | 5 ++---
1 files changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 7a572ec..28919d2 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3067,8 +3067,7 @@ static int dm_request_for_irq_injection(struct kvm_vcpu *vcpu,
{
return (!irqchip_in_kernel(vcpu->kvm) && !kvm_cpu_has_interrupt(vcpu) &&
kvm_run->request_interrupt_window &&
- vcpu->arch.interrupt_window_open &&
- (kvm_x86_ops->get_rflags(vcpu) & X86_EFLAGS_IF));
+ kvm_arch_interrupt_allowed(vcpu));
}
static void post_kvm_run_save(struct kvm_vcpu *vcpu,
@@ -3081,7 +3080,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu,
kvm_run->ready_for_interrupt_injection = 1;
else
kvm_run->ready_for_interrupt_injection =
- (vcpu->arch.interrupt_window_open &&
+ (kvm_arch_interrupt_allowed(vcpu) &&
!kvm_cpu_has_interrupt(vcpu));
}
--
1.6.0.6
^ permalink raw reply related [flat|nested] 47+ messages in thread
* [PATCH 45/46] KVM: SVM: Coalesce userspace/kernel irqchip interrupt injection logic
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (43 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 44/46] KVM: Use kvm_arch_interrupt_allowed() instead of checking interrupt_window_open directly Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
2009-05-20 11:18 ` [PATCH 46/46] KVM: Remove exception_injected() callback Avi Kivity
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
Start to use interrupt/exception queues like VMX does.
This also fixes a bug where, if an exit was caused by an exception
during the guest's access to the IDT, the exception was not
reinjected.
Use EVENTINJ to inject interrupts, and VINT only for detecting when
the IRQ window is open again. EVENTINJ ensures
the interrupt is injected immediately and not delayed.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/svm.c | 187 ++++++++++++++++++++++++----------------------------
1 files changed, 85 insertions(+), 102 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index f2933ab..a80ffaa 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -70,7 +70,6 @@ module_param(npt, int, S_IRUGO);
static int nested = 0;
module_param(nested, int, S_IRUGO);
-static void kvm_reput_irq(struct vcpu_svm *svm);
static void svm_flush_tlb(struct kvm_vcpu *vcpu);
static int nested_svm_exit_handled(struct vcpu_svm *svm, bool kvm_override);
@@ -199,9 +198,7 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
static bool svm_exception_injected(struct kvm_vcpu *vcpu)
{
- struct vcpu_svm *svm = to_svm(vcpu);
-
- return !(svm->vmcb->control.exit_int_info & SVM_EXITINTINFO_VALID);
+ return false;
}
static int is_external_interrupt(u32 info)
@@ -978,12 +975,9 @@ static int svm_guest_debug(struct kvm_vcpu *vcpu, struct kvm_guest_debug *dbg)
static int svm_get_irq(struct kvm_vcpu *vcpu)
{
- struct vcpu_svm *svm = to_svm(vcpu);
- u32 exit_int_info = svm->vmcb->control.exit_int_info;
-
- if (is_external_interrupt(exit_int_info))
- return exit_int_info & SVM_EVTINJ_VEC_MASK;
- return -1;
+ if (!vcpu->arch.interrupt.pending)
+ return -1;
+ return vcpu->arch.interrupt.nr;
}
static void load_host_msrs(struct kvm_vcpu *vcpu)
@@ -1090,17 +1084,8 @@ static void svm_set_dr(struct kvm_vcpu *vcpu, int dr, unsigned long value,
static int pf_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
{
- u32 exit_int_info = svm->vmcb->control.exit_int_info;
- struct kvm *kvm = svm->vcpu.kvm;
u64 fault_address;
u32 error_code;
- bool event_injection = false;
-
- if (!irqchip_in_kernel(kvm) &&
- is_external_interrupt(exit_int_info)) {
- event_injection = true;
- kvm_push_irq(&svm->vcpu, exit_int_info & SVM_EVTINJ_VEC_MASK);
- }
fault_address = svm->vmcb->control.exit_info_2;
error_code = svm->vmcb->control.exit_info_1;
@@ -1120,9 +1105,11 @@ static int pf_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
*/
if (npt_enabled)
svm_flush_tlb(&svm->vcpu);
-
- if (!npt_enabled && event_injection)
- kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
+ else {
+ if (svm->vcpu.arch.interrupt.pending ||
+ svm->vcpu.arch.exception.pending)
+ kvm_mmu_unprotect_page_virt(&svm->vcpu, fault_address);
+ }
return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code);
}
@@ -2196,7 +2183,6 @@ static int handle_exit(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
}
}
- kvm_reput_irq(svm);
if (svm->vmcb->control.exit_code == SVM_EXIT_ERR) {
kvm_run->exit_reason = KVM_EXIT_FAIL_ENTRY;
@@ -2259,13 +2245,19 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
((/*control->int_vector >> 4*/ 0xf) << V_INTR_PRIO_SHIFT);
}
+static void svm_queue_irq(struct vcpu_svm *svm, unsigned nr)
+{
+ svm->vmcb->control.event_inj = nr |
+ SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
+}
+
static void svm_set_irq(struct kvm_vcpu *vcpu, int irq)
{
struct vcpu_svm *svm = to_svm(vcpu);
nested_svm_intr(svm);
- svm_inject_irq(svm, irq);
+ svm_queue_irq(svm, irq);
}
static void update_cr8_intercept(struct kvm_vcpu *vcpu)
@@ -2298,98 +2290,47 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
(svm->vcpu.arch.hflags & HF_GIF_MASK);
}
-static void svm_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+static void enable_irq_window(struct kvm_vcpu *vcpu)
{
- struct vcpu_svm *svm = to_svm(vcpu);
- struct vmcb *vmcb = svm->vmcb;
- int intr_vector = -1;
-
- if ((vmcb->control.exit_int_info & SVM_EVTINJ_VALID) &&
- ((vmcb->control.exit_int_info & SVM_EVTINJ_TYPE_MASK) == 0)) {
- intr_vector = vmcb->control.exit_int_info &
- SVM_EVTINJ_VEC_MASK;
- vmcb->control.exit_int_info = 0;
- svm_inject_irq(svm, intr_vector);
- goto out;
- }
-
- if (vmcb->control.int_ctl & V_IRQ_MASK)
- goto out;
-
- if (!kvm_cpu_has_interrupt(vcpu))
- goto out;
-
- if (nested_svm_intr(svm))
- goto out;
-
- if (!(svm->vcpu.arch.hflags & HF_GIF_MASK))
- goto out;
-
- if (!(vmcb->save.rflags & X86_EFLAGS_IF) ||
- (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
- (vmcb->control.event_inj & SVM_EVTINJ_VALID)) {
- /* unable to deliver irq, set pending irq */
- svm_set_vintr(svm);
- svm_inject_irq(svm, 0x0);
- goto out;
- }
- /* Okay, we can deliver the interrupt: grab it and update PIC state. */
- intr_vector = kvm_cpu_get_interrupt(vcpu);
- svm_inject_irq(svm, intr_vector);
-out:
- update_cr8_intercept(vcpu);
+ svm_set_vintr(to_svm(vcpu));
+ svm_inject_irq(to_svm(vcpu), 0x0);
}
-static void kvm_reput_irq(struct vcpu_svm *svm)
+static void svm_intr_inject(struct kvm_vcpu *vcpu)
{
- struct vmcb_control_area *control = &svm->vmcb->control;
-
- if ((control->int_ctl & V_IRQ_MASK)
- && !irqchip_in_kernel(svm->vcpu.kvm)) {
- control->int_ctl &= ~V_IRQ_MASK;
- kvm_push_irq(&svm->vcpu, control->int_vector);
+ /* try to reinject previous events if any */
+ if (vcpu->arch.interrupt.pending) {
+ svm_queue_irq(to_svm(vcpu), vcpu->arch.interrupt.nr);
+ return;
}
- svm->vcpu.arch.interrupt_window_open =
- !(control->int_state & SVM_INTERRUPT_SHADOW_MASK) &&
- (svm->vcpu.arch.hflags & HF_GIF_MASK);
-}
-
-static void svm_do_inject_vector(struct vcpu_svm *svm)
-{
- svm_inject_irq(svm, kvm_pop_irq(&svm->vcpu));
+ /* try to inject new event if pending */
+ if (kvm_cpu_has_interrupt(vcpu)) {
+ if (vcpu->arch.interrupt_window_open) {
+ kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
+ svm_queue_irq(to_svm(vcpu), vcpu->arch.interrupt.nr);
+ }
+ }
}
-static void do_interrupt_requests(struct kvm_vcpu *vcpu,
- struct kvm_run *kvm_run)
+static void svm_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
{
struct vcpu_svm *svm = to_svm(vcpu);
- struct vmcb_control_area *control = &svm->vmcb->control;
+ bool req_int_win = !irqchip_in_kernel(vcpu->kvm) &&
+ kvm_run->request_interrupt_window;
if (nested_svm_intr(svm))
- return;
+ goto out;
- svm->vcpu.arch.interrupt_window_open =
- (!(control->int_state & SVM_INTERRUPT_SHADOW_MASK) &&
- (svm->vmcb->save.rflags & X86_EFLAGS_IF) &&
- (svm->vcpu.arch.hflags & HF_GIF_MASK));
+ svm->vcpu.arch.interrupt_window_open = svm_interrupt_allowed(vcpu);
- if (svm->vcpu.arch.interrupt_window_open &&
- kvm_cpu_has_interrupt(&svm->vcpu))
- /*
- * If interrupts enabled, and not blocked by sti or mov ss. Good.
- */
- svm_do_inject_vector(svm);
+ svm_intr_inject(vcpu);
- /*
- * Interrupts blocked. Wait for unblock.
- */
- if (!svm->vcpu.arch.interrupt_window_open &&
- (kvm_cpu_has_interrupt(&svm->vcpu) ||
- kvm_run->request_interrupt_window))
- svm_set_vintr(svm);
- else
- svm_clear_vintr(svm);
+ if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+ enable_irq_window(vcpu);
+
+out:
+ update_cr8_intercept(vcpu);
}
static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
@@ -2429,6 +2370,46 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
svm->vmcb->control.int_ctl |= cr8 & V_TPR_MASK;
}
+static void svm_complete_interrupts(struct vcpu_svm *svm)
+{
+ u8 vector;
+ int type;
+ u32 exitintinfo = svm->vmcb->control.exit_int_info;
+
+ svm->vcpu.arch.nmi_injected = false;
+ kvm_clear_exception_queue(&svm->vcpu);
+ kvm_clear_interrupt_queue(&svm->vcpu);
+
+ if (!(exitintinfo & SVM_EXITINTINFO_VALID))
+ return;
+
+ vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
+ type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
+
+ switch (type) {
+ case SVM_EXITINTINFO_TYPE_NMI:
+ svm->vcpu.arch.nmi_injected = true;
+ break;
+ case SVM_EXITINTINFO_TYPE_EXEPT:
+ /* In case of a software exception do not reinject the exception
+ vector, but re-execute the instruction instead */
+ if (vector == BP_VECTOR || vector == OF_VECTOR)
+ break;
+ if (exitintinfo & SVM_EXITINTINFO_VALID_ERR) {
+ u32 err = svm->vmcb->control.exit_int_info_err;
+ kvm_queue_exception_e(&svm->vcpu, vector, err);
+
+ } else
+ kvm_queue_exception(&svm->vcpu, vector);
+ break;
+ case SVM_EXITINTINFO_TYPE_INTR:
+ kvm_queue_interrupt(&svm->vcpu, vector);
+ break;
+ default:
+ break;
+ }
+}
+
#ifdef CONFIG_X86_64
#define R "r"
#else
@@ -2557,6 +2538,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
sync_cr8_to_lapic(vcpu);
svm->next_rip = 0;
+
+ svm_complete_interrupts(svm);
}
#undef R
@@ -2678,7 +2661,7 @@ static struct kvm_x86_ops svm_x86_ops = {
.queue_exception = svm_queue_exception,
.exception_injected = svm_exception_injected,
.inject_pending_irq = svm_intr_assist,
- .inject_pending_vectors = do_interrupt_requests,
+ .inject_pending_vectors = svm_intr_assist,
.interrupt_allowed = svm_interrupt_allowed,
.set_tss_addr = svm_set_tss_addr,
--
1.6.0.6
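To make the consolidated flow above easier to follow, here is a stand-alone model of svm_intr_inject()/svm_intr_assist(): reinject a previously queued event first, inject a fresh interrupt only when the window is open, and otherwise request an interrupt window. All types, field names, and helpers are simplified stand-ins for illustration, not the kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>

struct vcpu_model {
	bool irq_pending;      /* models vcpu->arch.interrupt.pending */
	int  irq_nr;           /* models vcpu->arch.interrupt.nr */
	bool window_open;      /* models vcpu->arch.interrupt_window_open */
	bool has_new_irq;      /* models kvm_cpu_has_interrupt() */
	int  new_irq_vector;
	int  event_inj;        /* models vmcb->control.event_inj; -1 = none */
	bool vintr;            /* models V_IRQ set by enable_irq_window() */
};

/* models svm_queue_irq(): write the vector into the injection field */
static void queue_irq(struct vcpu_model *v, int nr)
{
	v->event_inj = nr;
}

static void intr_inject(struct vcpu_model *v)
{
	/* try to reinject a previously queued event first */
	if (v->irq_pending) {
		queue_irq(v, v->irq_nr);
		return;
	}
	/* then inject a new interrupt, but only if the window is open */
	if (v->has_new_irq && v->window_open) {
		v->irq_pending = true;
		v->irq_nr = v->new_irq_vector;
		v->has_new_irq = false;
		queue_irq(v, v->irq_nr);
	}
}

static void intr_assist(struct vcpu_model *v)
{
	intr_inject(v);
	/* an undeliverable interrupt requests an interrupt window */
	if (v->has_new_irq)
		v->vintr = true;
}
```

With the window closed, the model leaves the vector unqueued and raises the virtual-interrupt request; once the window opens, the same call queues the vector, mirroring the two branches of the patch.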
* [PATCH 46/46] KVM: Remove exception_injected() callback.
2009-05-20 11:17 [PATCH 00/46] KVM updates for the 2.6.31 merge window (batch 2/3) Avi Kivity
` (44 preceding siblings ...)
2009-05-20 11:18 ` [PATCH 45/46] KVM: SVM: Coalesce userspace/kernel irqchip interrupt injection logic Avi Kivity
@ 2009-05-20 11:18 ` Avi Kivity
45 siblings, 0 replies; 47+ messages in thread
From: Avi Kivity @ 2009-05-20 11:18 UTC (permalink / raw)
To: linux-kernel; +Cc: kvm
From: Gleb Natapov <gleb@redhat.com>
It always returns false for VMX/SVM now.
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/svm.c | 6 ------
arch/x86/kvm/vmx.c | 6 ------
arch/x86/kvm/x86.c | 2 --
4 files changed, 0 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5edae35..ea3741e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -516,7 +516,6 @@ struct kvm_x86_ops {
void (*set_irq)(struct kvm_vcpu *vcpu, int vec);
void (*queue_exception)(struct kvm_vcpu *vcpu, unsigned nr,
bool has_error_code, u32 error_code);
- bool (*exception_injected)(struct kvm_vcpu *vcpu);
void (*inject_pending_irq)(struct kvm_vcpu *vcpu, struct kvm_run *run);
void (*inject_pending_vectors)(struct kvm_vcpu *vcpu,
struct kvm_run *run);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a80ffaa..8fa5a0e 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -196,11 +196,6 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
svm->vmcb->control.event_inj_err = error_code;
}
-static bool svm_exception_injected(struct kvm_vcpu *vcpu)
-{
- return false;
-}
-
static int is_external_interrupt(u32 info)
{
info &= SVM_EVTINJ_TYPE_MASK | SVM_EVTINJ_VALID;
@@ -2659,7 +2654,6 @@ static struct kvm_x86_ops svm_x86_ops = {
.get_irq = svm_get_irq,
.set_irq = svm_set_irq,
.queue_exception = svm_queue_exception,
- .exception_injected = svm_exception_injected,
.inject_pending_irq = svm_intr_assist,
.inject_pending_vectors = svm_intr_assist,
.interrupt_allowed = svm_interrupt_allowed,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9eb518f..3186fcf 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -789,11 +789,6 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, intr_info);
}
-static bool vmx_exception_injected(struct kvm_vcpu *vcpu)
-{
- return false;
-}
-
/*
* Swap MSR entry in host/guest MSR entry array.
*/
@@ -3697,7 +3692,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
.get_irq = vmx_get_irq,
.set_irq = vmx_inject_irq,
.queue_exception = vmx_queue_exception,
- .exception_injected = vmx_exception_injected,
.inject_pending_irq = vmx_intr_assist,
.inject_pending_vectors = vmx_intr_assist,
.interrupt_allowed = vmx_interrupt_allowed,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 28919d2..2683d7e 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3233,8 +3233,6 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
profile_hit(KVM_PROFILING, (void *)rip);
}
- if (vcpu->arch.exception.pending && kvm_x86_ops->exception_injected(vcpu))
- vcpu->arch.exception.pending = false;
kvm_lapic_sync_from_vapic(vcpu);
--
1.6.0.6
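The callback can go because svm_complete_interrupts() (patch 45) now re-queues any event that was in flight at VM-exit, so x86.c no longer needs to ask whether an exception was injected. A user-space model of that decode is sketched below; the mask values mirror the SVM_EXITINTINFO_* bit layout but are local stand-ins, not the kernel macros.

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins mirroring the SVM_EXITINTINFO_* encoding:
 * bits 7:0 vector, bits 10:8 type, bit 31 valid. */
#define XI_VEC_MASK   0xffu
#define XI_TYPE_MASK  (7u << 8)
#define XI_TYPE_INTR  (0u << 8)
#define XI_TYPE_NMI   (2u << 8)
#define XI_TYPE_EXEPT (3u << 8)
#define XI_VALID      (1u << 31)

enum queued { Q_NONE, Q_NMI, Q_EXCEPTION, Q_INTERRUPT };

/* models the switch in svm_complete_interrupts(): decide what to
 * re-queue for reinjection on the next VM-entry */
static enum queued complete_interrupts(uint32_t exitintinfo)
{
	uint32_t vector = exitintinfo & XI_VEC_MASK;

	if (!(exitintinfo & XI_VALID))
		return Q_NONE;

	switch (exitintinfo & XI_TYPE_MASK) {
	case XI_TYPE_NMI:
		return Q_NMI;
	case XI_TYPE_EXEPT:
		/* #BP (3) / #OF (4): re-execute the instruction
		 * instead of reinjecting the exception */
		if (vector == 3 || vector == 4)
			return Q_NONE;
		return Q_EXCEPTION;
	case XI_TYPE_INTR:
		return Q_INTERRUPT;
	default:
		return Q_NONE;
	}
}
```

Because the re-queue happens unconditionally in the exit path, the generic code can rely on the pending-event state alone, which is exactly what lets this patch drop exception_injected().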