* [PATCH 0/8] KVM updates for 2.6.20-rc2
@ 2006-12-28 10:07 Avi Kivity
2006-12-28 10:08 ` [PATCH 1/8] KVM: Use boot_cpu_data instead of current_cpu_data Avi Kivity
` (4 more replies)
0 siblings, 5 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:07 UTC (permalink / raw)
To: kvm-devel; +Cc: linux-kernel, Andrew Morton, Ingo Molnar
Compatibility and stabilization fixes, many centered around msrs, but a
few other corner cases as well.
--
error compiling committee.c: too many arguments to function
* [PATCH 1/8] KVM: Use boot_cpu_data instead of current_cpu_data
  2006-12-28 10:07 [PATCH 0/8] KVM updates for 2.6.20-rc2 Avi Kivity
@ 2006-12-28 10:08 ` Avi Kivity
  2006-12-28 10:09 ` [PATCH 2/8] KVM: Simplify is_long_mode() Avi Kivity
  ` (3 subsequent siblings)
  4 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:08 UTC (permalink / raw)
To: kvm-devel; +Cc: linux-kernel, akpm, mingo

current_cpu_data invokes smp_processor_id(), which is inadvisable when
preemption is enabled.  Switch to boot_cpu_data instead.

Resolves sourceforge bug 1621401.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Index: linux-2.6/drivers/kvm/svm.c
===================================================================
--- linux-2.6.orig/drivers/kvm/svm.c
+++ linux-2.6/drivers/kvm/svm.c
@@ -246,7 +246,7 @@ static int has_svm(void)
 {
 	uint32_t eax, ebx, ecx, edx;

-	if (current_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
 		printk(KERN_INFO "has_svm: not amd\n");
 		return 0;
 	}
* [PATCH 2/8] KVM: Simplify is_long_mode()
  2006-12-28 10:07 [PATCH 0/8] KVM updates for 2.6.20-rc2 Avi Kivity
  2006-12-28 10:08 ` [PATCH 1/8] KVM: Use boot_cpu_data instead of current_cpu_data Avi Kivity
@ 2006-12-28 10:09 ` Avi Kivity
  2006-12-28 10:12 ` [PATCH 5/8] KVM: Move common msr handling to arch independent code Avi Kivity
  ` (2 subsequent siblings)
  4 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:09 UTC (permalink / raw)
To: kvm-devel; +Cc: linux-kernel, akpm, mingo

Instead of doing tricky stuff with the arch dependent virtualization
registers, take a peek at the guest's efer.

This simplifies some code, and fixes some confusion in the mmu branch.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Index: linux-2.6/drivers/kvm/mmu.c
===================================================================
--- linux-2.6.orig/drivers/kvm/mmu.c
+++ linux-2.6/drivers/kvm/mmu.c
@@ -578,7 +578,7 @@ static int init_kvm_mmu(struct kvm_vcpu
 	if (!is_paging(vcpu))
 		return nonpaging_init_context(vcpu);
-	else if (kvm_arch_ops->is_long_mode(vcpu))
+	else if (is_long_mode(vcpu))
 		return paging64_init_context(vcpu);
 	else if (is_pae(vcpu))
 		return paging32E_init_context(vcpu);
Index: linux-2.6/drivers/kvm/kvm_main.c
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm_main.c
+++ linux-2.6/drivers/kvm/kvm_main.c
@@ -398,7 +398,7 @@ void set_cr4(struct kvm_vcpu *vcpu, unsi
 		return;
 	}

-	if (kvm_arch_ops->is_long_mode(vcpu)) {
+	if (is_long_mode(vcpu)) {
 		if (!(cr4 & CR4_PAE_MASK)) {
 			printk(KERN_DEBUG "set_cr4: #GP, clearing PAE while "
 			       "in long mode\n");
@@ -425,7 +425,7 @@ EXPORT_SYMBOL_GPL(set_cr4);

 void set_cr3(struct kvm_vcpu *vcpu, unsigned long cr3)
 {
-	if (kvm_arch_ops->is_long_mode(vcpu)) {
+	if (is_long_mode(vcpu)) {
 		if ( cr3 & CR3_L_MODE_RESEVED_BITS) {
 			printk(KERN_DEBUG "set_cr3: #GP, reserved bits\n");
 			inject_gp(vcpu);
Index: linux-2.6/drivers/kvm/kvm.h
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm.h
+++ linux-2.6/drivers/kvm/kvm.h
@@ -278,7 +278,6 @@ struct kvm_arch_ops {
 			   struct kvm_segment *var, int seg);
 	void (*set_segment)(struct kvm_vcpu *vcpu,
 			    struct kvm_segment *var, int seg);
-	int (*is_long_mode)(struct kvm_vcpu *vcpu);
 	void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l);
 	void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0);
 	void (*set_cr0_no_modeswitch)(struct kvm_vcpu *vcpu,
@@ -403,6 +402,15 @@ static inline struct page *_gfn_to_page(
 	return (slot) ? slot->phys_mem[gfn - slot->base_gfn] : NULL;
 }

+static inline int is_long_mode(struct kvm_vcpu *vcpu)
+{
+#ifdef CONFIG_X86_64
+	return vcpu->shadow_efer & EFER_LME;
+#else
+	return 0;
+#endif
+}
+
 static inline int is_pae(struct kvm_vcpu *vcpu)
 {
 	return vcpu->cr4 & CR4_PAE_MASK;
Index: linux-2.6/drivers/kvm/svm.c
===================================================================
--- linux-2.6.orig/drivers/kvm/svm.c
+++ linux-2.6/drivers/kvm/svm.c
@@ -166,11 +166,6 @@ static inline void write_dr7(unsigned lo
 	asm volatile ("mov %0, %%dr7" :: "r" (val));
 }

-static inline int svm_is_long_mode(struct kvm_vcpu *vcpu)
-{
-	return vcpu->svm->vmcb->save.efer & KVM_EFER_LMA;
-}
-
 static inline void force_new_asid(struct kvm_vcpu *vcpu)
 {
 	vcpu->svm->asid_generation--;
@@ -1609,7 +1604,6 @@ static struct kvm_arch_ops svm_arch_ops
 	.get_segment_base = svm_get_segment_base,
 	.get_segment = svm_get_segment,
 	.set_segment = svm_set_segment,
-	.is_long_mode = svm_is_long_mode,
 	.get_cs_db_l_bits = svm_get_cs_db_l_bits,
 	.set_cr0 = svm_set_cr0,
 	.set_cr0_no_modeswitch = svm_set_cr0,
Index: linux-2.6/drivers/kvm/paging_tmpl.h
===================================================================
--- linux-2.6.orig/drivers/kvm/paging_tmpl.h
+++ linux-2.6/drivers/kvm/paging_tmpl.h
@@ -68,7 +68,7 @@ static void FNAME(init_walker)(struct gu
 	hpa = safe_gpa_to_hpa(vcpu, vcpu->cr3 & PT64_BASE_ADDR_MASK);
 	walker->table = kmap_atomic(pfn_to_page(hpa >> PAGE_SHIFT), KM_USER0);

-	ASSERT((!kvm_arch_ops->is_long_mode(vcpu) && is_pae(vcpu)) ||
+	ASSERT((!is_long_mode(vcpu) && is_pae(vcpu)) ||
 	       (vcpu->cr3 & ~(PAGE_MASK | CR3_FLAGS_MASK)) == 0);

 	walker->table = (pt_element_t *)( (unsigned long)walker->table |
@@ -131,7 +131,7 @@ static pt_element_t *FNAME(fetch_guest)(
 		     (walker->table[index] & PT_PAGE_SIZE_MASK) &&
 		     (PTTYPE == 64 || is_pse(vcpu))))
 			return &walker->table[index];
-		if (walker->level != 3 || kvm_arch_ops->is_long_mode(vcpu))
+		if (walker->level != 3 || is_long_mode(vcpu))
 			walker->inherited_ar &= walker->table[index];
 		paddr = safe_gpa_to_hpa(vcpu, walker->table[index] & PT_BASE_ADDR_MASK);
 		kunmap_atomic(walker->table, KM_USER0);
Index: linux-2.6/drivers/kvm/vmx.c
===================================================================
--- linux-2.6.orig/drivers/kvm/vmx.c
+++ linux-2.6/drivers/kvm/vmx.c
@@ -900,11 +900,6 @@ static void vmx_set_segment(struct kvm_v
 	vmcs_write32(sf->ar_bytes, ar);
 }

-static int vmx_is_long_mode(struct kvm_vcpu *vcpu)
-{
-	return vmcs_read32(VM_ENTRY_CONTROLS) & VM_ENTRY_CONTROLS_IA32E_MASK;
-}
-
 static void vmx_get_cs_db_l_bits(struct kvm_vcpu *vcpu, int *db, int *l)
 {
 	u32 ar = vmcs_read32(GUEST_CS_AR_BYTES);
@@ -1975,7 +1970,6 @@ static struct kvm_arch_ops vmx_arch_ops
 	.get_segment_base = vmx_get_segment_base,
 	.get_segment = vmx_get_segment,
 	.set_segment = vmx_set_segment,
-	.is_long_mode = vmx_is_long_mode,
 	.get_cs_db_l_bits = vmx_get_cs_db_l_bits,
 	.set_cr0 = vmx_set_cr0,
 	.set_cr0_no_modeswitch = vmx_set_cr0_no_modeswitch,
* [PATCH 5/8] KVM: Move common msr handling to arch independent code
  2006-12-28 10:07 [PATCH 0/8] KVM updates for 2.6.20-rc2 Avi Kivity
  2006-12-28 10:08 ` [PATCH 1/8] KVM: Use boot_cpu_data instead of current_cpu_data Avi Kivity
  2006-12-28 10:09 ` [PATCH 2/8] KVM: Simplify is_long_mode() Avi Kivity
@ 2006-12-28 10:12 ` Avi Kivity
  2006-12-28 10:13 ` [PATCH 6/8] KVM: More msr misery Avi Kivity
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  4 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:12 UTC (permalink / raw)
To: kvm-devel; +Cc: linux-kernel, akpm, mingo

Signed-off-by: Avi Kivity <avi@qumranet.com>

Index: linux-2.6/drivers/kvm/kvm_main.c
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm_main.c
+++ linux-2.6/drivers/kvm/kvm_main.c
@@ -1103,6 +1103,47 @@ void realmode_set_cr(struct kvm_vcpu *vc
 	}
 }

+int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
+{
+	u64 data;
+
+	switch (msr) {
+	case 0xc0010010: /* SYSCFG */
+	case 0xc0010015: /* HWCR */
+	case MSR_IA32_PLATFORM_ID:
+	case MSR_IA32_P5_MC_ADDR:
+	case MSR_IA32_P5_MC_TYPE:
+	case MSR_IA32_MC0_CTL:
+	case MSR_IA32_MCG_STATUS:
+	case MSR_IA32_MCG_CAP:
+	case MSR_IA32_MC0_MISC:
+	case MSR_IA32_MC0_MISC+4:
+	case MSR_IA32_MC0_MISC+8:
+	case MSR_IA32_MC0_MISC+12:
+	case MSR_IA32_MC0_MISC+16:
+	case MSR_IA32_UCODE_REV:
+		/* MTRR registers */
+	case 0xfe:
+	case 0x200 ... 0x2ff:
+		data = 0;
+		break;
+	case MSR_IA32_APICBASE:
+		data = vcpu->apic_base;
+		break;
+#ifdef CONFIG_X86_64
+	case MSR_EFER:
+		data = vcpu->shadow_efer;
+		break;
+#endif
+	default:
+		printk(KERN_ERR "kvm: unhandled rdmsr: 0x%x\n", msr);
+		return 1;
+	}
+	*pdata = data;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_get_msr_common);
+
 /*
  * Reads an msr value (of 'msr_index') into 'pdata'.
  * Returns 0 on success, non-0 otherwise.
@@ -1115,7 +1156,7 @@ static int get_msr(struct kvm_vcpu *vcpu

 #ifdef CONFIG_X86_64

-void set_efer(struct kvm_vcpu *vcpu, u64 efer)
+static void set_efer(struct kvm_vcpu *vcpu, u64 efer)
 {
 	if (efer & EFER_RESERVED_BITS) {
 		printk(KERN_DEBUG "set_efer: 0x%llx #GP, reserved bits\n",
@@ -1138,10 +1179,36 @@ void set_efer(struct kvm_vcpu *vcpu, u64

 	vcpu->shadow_efer = efer;
 }
-EXPORT_SYMBOL_GPL(set_efer);

 #endif

+int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data)
+{
+	switch (msr) {
+#ifdef CONFIG_X86_64
+	case MSR_EFER:
+		set_efer(vcpu, data);
+		break;
+#endif
+	case MSR_IA32_MC0_STATUS:
+		printk(KERN_WARNING "%s: MSR_IA32_MC0_STATUS 0x%llx, nop\n",
+		       __FUNCTION__, data);
+		break;
+	case MSR_IA32_UCODE_REV:
+	case MSR_IA32_UCODE_WRITE:
+	case 0x200 ... 0x2ff: /* MTRRs */
+		break;
+	case MSR_IA32_APICBASE:
+		vcpu->apic_base = data;
+		break;
+	default:
+		printk(KERN_ERR "kvm: unhandled wrmsr: 0x%x\n", msr);
+		return 1;
+	}
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_set_msr_common);
+
 /*
  * Writes msr value into into the appropriate "register".
  * Returns 0 on success, non-0 otherwise.
Index: linux-2.6/drivers/kvm/kvm.h
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm.h
+++ linux-2.6/drivers/kvm/kvm.h
@@ -374,9 +374,8 @@ void set_cr4(struct kvm_vcpu *vcpu, unsi
 void set_cr8(struct kvm_vcpu *vcpu, unsigned long cr0);
 void lmsw(struct kvm_vcpu *vcpu, unsigned long msw);

-#ifdef CONFIG_X86_64
-void set_efer(struct kvm_vcpu *vcpu, u64 efer);
-#endif
+int kvm_get_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
+int kvm_set_msr_common(struct kvm_vcpu *vcpu, u32 msr, u64 data);

 void fx_init(struct kvm_vcpu *vcpu);
Index: linux-2.6/drivers/kvm/svm.c
===================================================================
--- linux-2.6.orig/drivers/kvm/svm.c
+++ linux-2.6/drivers/kvm/svm.c
@@ -1068,25 +1068,6 @@ static int emulate_on_interception(struc
 static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
 {
 	switch (ecx) {
-	case 0xc0010010: /* SYSCFG */
-	case 0xc0010015: /* HWCR */
-	case MSR_IA32_PLATFORM_ID:
-	case MSR_IA32_P5_MC_ADDR:
-	case MSR_IA32_P5_MC_TYPE:
-	case MSR_IA32_MC0_CTL:
-	case MSR_IA32_MCG_STATUS:
-	case MSR_IA32_MCG_CAP:
-	case MSR_IA32_MC0_MISC:
-	case MSR_IA32_MC0_MISC+4:
-	case MSR_IA32_MC0_MISC+8:
-	case MSR_IA32_MC0_MISC+12:
-	case MSR_IA32_MC0_MISC+16:
-	case MSR_IA32_UCODE_REV:
-		/* MTRR registers */
-	case 0xfe:
-	case 0x200 ... 0x2ff:
-		*data = 0;
-		break;
 	case MSR_IA32_TIME_STAMP_COUNTER: {
 		u64 tsc;

@@ -1094,12 +1075,6 @@ static int svm_get_msr(struct kvm_vcpu *
 		*data = vcpu->svm->vmcb->control.tsc_offset + tsc;
 		break;
 	}
-	case MSR_EFER:
-		*data = vcpu->shadow_efer;
-		break;
-	case MSR_IA32_APICBASE:
-		*data = vcpu->apic_base;
-		break;
 	case MSR_K6_STAR:
 		*data = vcpu->svm->vmcb->save.star;
 		break;
@@ -1127,8 +1102,7 @@ static int svm_get_msr(struct kvm_vcpu *
 		*data = vcpu->svm->vmcb->save.sysenter_esp;
 		break;
 	default:
-		printk(KERN_ERR "kvm: unhandled rdmsr: 0x%x\n", ecx);
-		return 1;
+		return kvm_get_msr_common(vcpu, ecx, data);
 	}
 	return 0;
 }
@@ -1152,15 +1126,6 @@ static int rdmsr_interception(struct kvm
 static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
 {
 	switch (ecx) {
-#ifdef CONFIG_X86_64
-	case MSR_EFER:
-		set_efer(vcpu, data);
-		break;
-#endif
-	case MSR_IA32_MC0_STATUS:
-		printk(KERN_WARNING "%s: MSR_IA32_MC0_STATUS 0x%llx, nop\n"
-		       , __FUNCTION__, data);
-		break;
 	case MSR_IA32_TIME_STAMP_COUNTER: {
 		u64 tsc;

@@ -1168,13 +1133,6 @@ static int svm_set_msr(struct kvm_vcpu *
 		vcpu->svm->vmcb->control.tsc_offset = data - tsc;
 		break;
 	}
-	case MSR_IA32_UCODE_REV:
-	case MSR_IA32_UCODE_WRITE:
-	case 0x200 ... 0x2ff: /* MTRRs */
-		break;
-	case MSR_IA32_APICBASE:
-		vcpu->apic_base = data;
-		break;
 	case MSR_K6_STAR:
 		vcpu->svm->vmcb->save.star = data;
 		break;
@@ -1202,8 +1160,7 @@ static int svm_set_msr(struct kvm_vcpu *
 		vcpu->svm->vmcb->save.sysenter_esp = data;
 		break;
 	default:
-		printk(KERN_ERR "kvm: unhandled wrmsr: %x\n", ecx);
-		return 1;
+		return kvm_set_msr_common(vcpu, ecx, data);
 	}
 	return 0;
 }
Index: linux-2.6/drivers/kvm/vmx.c
===================================================================
--- linux-2.6.orig/drivers/kvm/vmx.c
+++ linux-2.6/drivers/kvm/vmx.c
@@ -344,8 +344,7 @@ static int vmx_get_msr(struct kvm_vcpu *
 		data = vmcs_readl(GUEST_GS_BASE);
 		break;
 	case MSR_EFER:
-		data = vcpu->shadow_efer;
-		break;
+		return kvm_get_msr_common(vcpu, msr_index, pdata);
 #endif
 	case MSR_IA32_TIME_STAMP_COUNTER:
 		data = guest_read_tsc();
@@ -359,36 +358,13 @@ static int vmx_get_msr(struct kvm_vcpu *
 	case MSR_IA32_SYSENTER_ESP:
 		data = vmcs_read32(GUEST_SYSENTER_ESP);
 		break;
-	case 0xc0010010: /* SYSCFG */
-	case 0xc0010015: /* HWCR */
-	case MSR_IA32_PLATFORM_ID:
-	case MSR_IA32_P5_MC_ADDR:
-	case MSR_IA32_P5_MC_TYPE:
-	case MSR_IA32_MC0_CTL:
-	case MSR_IA32_MCG_STATUS:
-	case MSR_IA32_MCG_CAP:
-	case MSR_IA32_MC0_MISC:
-	case MSR_IA32_MC0_MISC+4:
-	case MSR_IA32_MC0_MISC+8:
-	case MSR_IA32_MC0_MISC+12:
-	case MSR_IA32_MC0_MISC+16:
-	case MSR_IA32_UCODE_REV:
-		/* MTRR registers */
-	case 0xfe:
-	case 0x200 ... 0x2ff:
-		data = 0;
-		break;
-	case MSR_IA32_APICBASE:
-		data = vcpu->apic_base;
-		break;
 	default:
 		msr = find_msr_entry(vcpu, msr_index);
-		if (!msr) {
-			printk(KERN_ERR "kvm: unhandled rdmsr: %x\n", msr_index);
-			return 1;
+		if (msr) {
+			data = msr->data;
+			break;
 		}
-		data = msr->data;
-		break;
+		return kvm_get_msr_common(vcpu, msr_index, pdata);
 	}

 	*pdata = data;
@@ -405,6 +381,8 @@ static int vmx_set_msr(struct kvm_vcpu *
 	struct vmx_msr_entry *msr;
 	switch (msr_index) {
 #ifdef CONFIG_X86_64
+	case MSR_EFER:
+		return kvm_set_msr_common(vcpu, msr_index, data);
 	case MSR_FS_BASE:
 		vmcs_writel(GUEST_FS_BASE, data);
 		break;
@@ -421,32 +399,17 @@ static int vmx_set_msr(struct kvm_vcpu *
 	case MSR_IA32_SYSENTER_ESP:
 		vmcs_write32(GUEST_SYSENTER_ESP, data);
 		break;
-#ifdef __x86_64
-	case MSR_EFER:
-		set_efer(vcpu, data);
-		break;
-	case MSR_IA32_MC0_STATUS:
-		printk(KERN_WARNING "%s: MSR_IA32_MC0_STATUS 0x%llx, nop\n"
-		       , __FUNCTION__, data);
-		break;
-#endif
 	case MSR_IA32_TIME_STAMP_COUNTER: {
 		guest_write_tsc(data);
 		break;
 	}
-	case MSR_IA32_UCODE_REV:
-	case MSR_IA32_UCODE_WRITE:
-	case 0x200 ... 0x2ff: /* MTRRs */
-		break;
-	case MSR_IA32_APICBASE:
-		vcpu->apic_base = data;
-		break;
 	default:
 		msr = find_msr_entry(vcpu, msr_index);
-		if (!msr) {
-			printk(KERN_ERR "kvm: unhandled wrmsr: 0x%x\n", msr_index);
-			return 1;
+		if (msr) {
+			msr->data = data;
+			break;
 		}
+		return kvm_set_msr_common(vcpu, msr_index, data);
 		msr->data = data;
 		break;
 	}
* [PATCH 6/8] KVM: More msr misery
  2006-12-28 10:07 [PATCH 0/8] KVM updates for 2.6.20-rc2 Avi Kivity
  ` (2 preceding siblings ...)
  2006-12-28 10:12 ` [PATCH 5/8] KVM: Move common msr handling to arch independent code Avi Kivity
@ 2006-12-28 10:13 ` Avi Kivity
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  4 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:13 UTC (permalink / raw)
To: kvm-devel; +Cc: linux-kernel, akpm, mingo

These msrs are referenced by benchmarking software when pretending to be
an Intel cpu.

Signed-off-by: Avi Kivity <avi@qumranet.com>

Index: linux-2.6/drivers/kvm/kvm_main.c
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm_main.c
+++ linux-2.6/drivers/kvm/kvm_main.c
@@ -1122,11 +1122,15 @@ int kvm_get_msr_common(struct kvm_vcpu *
 	case MSR_IA32_MC0_MISC+12:
 	case MSR_IA32_MC0_MISC+16:
 	case MSR_IA32_UCODE_REV:
+	case MSR_IA32_PERF_STATUS:
 		/* MTRR registers */
 	case 0xfe:
 	case 0x200 ... 0x2ff:
 		data = 0;
 		break;
+	case 0xcd: /* fsb frequency */
+		data = 3;
+		break;
 	case MSR_IA32_APICBASE:
 		data = vcpu->apic_base;
 		break;
* [PATCH 3/8] KVM: Initialize kvm_arch_ops on unload
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 10:10 ` Avi Kivity
  2006-12-28 10:11 ` [PATCH 4/8] KVM: Implement a few system configuration msrs Avi Kivity
  ` (4 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:10 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Cc: akpm-3NddpPZAyC0, linux-kernel-u79uwXL29TY76Z2rM5mHXA

From: Yoshimi Ichiyanagi <ichiyanagi.yoshimi-Zyj7fXuS5i5L9jVzuh4AOg@public.gmane.org>

The latest version of kvm doesn't initialize kvm_arch_ops in kvm_init(),
which causes an error with the following sequence.

1. Load the supported arch's module.
2. Load the unsupported arch's module. (loading error)
3. Unload the unsupported arch's module.

You'll get the following error message after step 3.

"BUG: unable to handle kernel paging request at virtual address xxxxxxxx"

The problem here is that the unsupported arch's module overwrites
kvm_arch_ops of the supported arch's module at step 2.

This patch initializes kvm_arch_ops upon loading the architecture
specific kvm module, and prevents overwriting kvm_arch_ops when
kvm_arch_ops is already set correctly.

Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>

Index: linux-2.6/drivers/kvm/kvm_main.c
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm_main.c
+++ linux-2.6/drivers/kvm/kvm_main.c
@@ -1865,6 +1865,11 @@ int kvm_init_arch(struct kvm_arch_ops *o
 {
 	int r;

+	if (kvm_arch_ops) {
+		printk(KERN_ERR "kvm: already loaded the other module\n");
+		return -EEXIST;
+	}
+
 	kvm_arch_ops = ops;

 	if (!kvm_arch_ops->cpu_has_kvm_support()) {
@@ -1907,6 +1912,7 @@ void kvm_exit_arch(void)
 	unregister_reboot_notifier(&kvm_reboot_notifier);
 	on_each_cpu(kvm_arch_ops->hardware_disable, 0, 0, 1);
 	kvm_arch_ops->hardware_unsetup();
+	kvm_arch_ops = NULL;
 }

 static __init int kvm_init(void)

-------------------------------------------------------------------------
Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net's Techsay panel and you'll get the chance to share your
opinions on IT & business topics through brief surveys - and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
* [PATCH 4/8] KVM: Implement a few system configuration msrs
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2006-12-28 10:10 ` [PATCH 3/8] KVM: Initialize kvm_arch_ops on unload Avi Kivity
@ 2006-12-28 10:11 ` Avi Kivity
  [not found] ` <20061228101117.65A392500F7-LjA0eNSCdXrQnzwC+xcbyw@public.gmane.org>
  2006-12-28 10:14 ` [PATCH 7/8] KVM: Rename some msrs Avi Kivity
  ` (3 subsequent siblings)
  5 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:11 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Cc: akpm-3NddpPZAyC0, linux-kernel-u79uwXL29TY76Z2rM5mHXA

Resolves sourceforge bug 1622229 (guest crashes running benchmark
software).

Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>

Index: linux-2.6/drivers/kvm/svm.c
===================================================================
--- linux-2.6.orig/drivers/kvm/svm.c
+++ linux-2.6/drivers/kvm/svm.c
@@ -1068,6 +1068,9 @@ static int emulate_on_interception(struc
 static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
 {
 	switch (ecx) {
+	case 0xc0010010: /* SYSCFG */
+	case 0xc0010015: /* HWCR */
+	case MSR_IA32_PLATFORM_ID:
 	case MSR_IA32_P5_MC_ADDR:
 	case MSR_IA32_P5_MC_TYPE:
 	case MSR_IA32_MC0_CTL:
Index: linux-2.6/drivers/kvm/vmx.c
===================================================================
--- linux-2.6.orig/drivers/kvm/vmx.c
+++ linux-2.6/drivers/kvm/vmx.c
@@ -359,6 +359,9 @@ static int vmx_get_msr(struct kvm_vcpu *
 	case MSR_IA32_SYSENTER_ESP:
 		data = vmcs_read32(GUEST_SYSENTER_ESP);
 		break;
+	case 0xc0010010: /* SYSCFG */
+	case 0xc0010015: /* HWCR */
+	case MSR_IA32_PLATFORM_ID:
 	case MSR_IA32_P5_MC_ADDR:
 	case MSR_IA32_P5_MC_TYPE:
 	case MSR_IA32_MC0_CTL:
* Re: [PATCH 4/8] KVM: Implement a few system configuration msrs
  [not found] ` <20061228101117.65A392500F7-LjA0eNSCdXrQnzwC+xcbyw@public.gmane.org>
@ 2007-01-01  0:07 ` Ingo Oeser
  [not found] ` <200701010107.18008.ioe-lkml-MFZNYGWX9eyELgA04lAiVw@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Ingo Oeser @ 2007-01-01 0:07 UTC (permalink / raw)
To: Avi Kivity
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, akpm-3NddpPZAyC0,
    linux-kernel-u79uwXL29TY76Z2rM5mHXA

Hi,

On Thursday, 28. December 2006 11:11, Avi Kivity wrote:
> Index: linux-2.6/drivers/kvm/svm.c
> ===================================================================
> --- linux-2.6.orig/drivers/kvm/svm.c
> +++ linux-2.6/drivers/kvm/svm.c
> @@ -1068,6 +1068,9 @@ static int emulate_on_interception(struc
>  static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
>  {
>  	switch (ecx) {
> +	case 0xc0010010: /* SYSCFG */
> +	case 0xc0010015: /* HWCR */
> +	case MSR_IA32_PLATFORM_ID:
>  	case MSR_IA32_P5_MC_ADDR:
>  	case MSR_IA32_P5_MC_TYPE:
>  	case MSR_IA32_MC0_CTL:

What about just defining constants for these?
Then you can rip out these comments.

Same for linux-2.6/drivers/kvm/vmx.c

Regards

Ingo Oeser
* Re: [PATCH 4/8] KVM: Implement a few system configuration msrs
  [not found] ` <200701010107.18008.ioe-lkml-MFZNYGWX9eyELgA04lAiVw@public.gmane.org>
@ 2007-01-01  8:20 ` Avi Kivity
  0 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2007-01-01 8:20 UTC (permalink / raw)
To: Ingo Oeser
Cc: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f, akpm-3NddpPZAyC0,
    linux-kernel-u79uwXL29TY76Z2rM5mHXA

Ingo Oeser wrote:
> Hi,
>
> On Thursday, 28. December 2006 11:11, Avi Kivity wrote:
>
>> Index: linux-2.6/drivers/kvm/svm.c
>> ===================================================================
>> --- linux-2.6.orig/drivers/kvm/svm.c
>> +++ linux-2.6/drivers/kvm/svm.c
>> @@ -1068,6 +1068,9 @@ static int emulate_on_interception(struc
>>  static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
>>  {
>>  	switch (ecx) {
>> +	case 0xc0010010: /* SYSCFG */
>> +	case 0xc0010015: /* HWCR */
>> +	case MSR_IA32_PLATFORM_ID:
>>  	case MSR_IA32_P5_MC_ADDR:
>>  	case MSR_IA32_P5_MC_TYPE:
>>  	case MSR_IA32_MC0_CTL:
>>
>
> What about just defining constants for these?
> Then you can rip out these comments.
>
> Same for linux-2.6/drivers/kvm/vmx.c
>

Yes, there are a few more of these as well.  I'll clean them up once
things quiet down a bit.

--
error compiling committee.c: too many arguments to function
* [PATCH 7/8] KVM: Rename some msrs
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2006-12-28 10:10 ` [PATCH 3/8] KVM: Initialize kvm_arch_ops on unload Avi Kivity
  2006-12-28 10:11 ` [PATCH 4/8] KVM: Implement a few system configuration msrs Avi Kivity
@ 2006-12-28 10:14 ` Avi Kivity
  2006-12-28 10:15 ` [PATCH 8/8] KVM: Fix oops on oom Avi Kivity
  ` (2 subsequent siblings)
  5 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:14 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Cc: akpm-3NddpPZAyC0, linux-kernel-u79uwXL29TY76Z2rM5mHXA

From: Nguyen Anh Quynh <aquynh-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>

No need to append _MSR to msr names, a prefix should suffice.

Signed-off-by: Nguyen Anh Quynh <aquynh-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>

Index: linux-2.6/drivers/kvm/vmx.c
===================================================================
--- linux-2.6.orig/drivers/kvm/vmx.c
+++ linux-2.6/drivers/kvm/vmx.c
@@ -26,7 +26,6 @@

 #include "segment_descriptor.h"

-#define MSR_IA32_FEATURE_CONTROL 0x03a

 MODULE_AUTHOR("Qumranet");
 MODULE_LICENSE("GPL");
@@ -519,11 +518,11 @@ static __init void setup_vmcs_descriptor
 {
 	u32 vmx_msr_low, vmx_msr_high;

-	rdmsr(MSR_IA32_VMX_BASIC_MSR, vmx_msr_low, vmx_msr_high);
+	rdmsr(MSR_IA32_VMX_BASIC, vmx_msr_low, vmx_msr_high);
 	vmcs_descriptor.size = vmx_msr_high & 0x1fff;
 	vmcs_descriptor.order = get_order(vmcs_descriptor.size);
 	vmcs_descriptor.revision_id = vmx_msr_low;
-};
+}

 static struct vmcs *alloc_vmcs_cpu(int cpu)
 {
@@ -1039,12 +1038,12 @@ static int vmx_vcpu_setup(struct kvm_vcp
 	vmcs_write64(GUEST_IA32_DEBUGCTL, 0);

 	/* Control */
-	vmcs_write32_fixedbits(MSR_IA32_VMX_PINBASED_CTLS_MSR,
+	vmcs_write32_fixedbits(MSR_IA32_VMX_PINBASED_CTLS,
 			       PIN_BASED_VM_EXEC_CONTROL,
 			       PIN_BASED_EXT_INTR_MASK   /* 20.6.1 */
 			       | PIN_BASED_NMI_EXITING   /* 20.6.1 */
 			);
-	vmcs_write32_fixedbits(MSR_IA32_VMX_PROCBASED_CTLS_MSR,
+	vmcs_write32_fixedbits(MSR_IA32_VMX_PROCBASED_CTLS,
 			       CPU_BASED_VM_EXEC_CONTROL,
 			       CPU_BASED_HLT_EXITING         /* 20.6.2 */
 			       | CPU_BASED_CR8_LOAD_EXITING  /* 20.6.2 */
@@ -1127,7 +1126,7 @@ static int vmx_vcpu_setup(struct kvm_vcp
 		       virt_to_phys(vcpu->guest_msrs + NR_BAD_MSRS));
 	vmcs_writel(VM_EXIT_MSR_LOAD_ADDR,
 		       virt_to_phys(vcpu->host_msrs + NR_BAD_MSRS));
-	vmcs_write32_fixedbits(MSR_IA32_VMX_EXIT_CTLS_MSR, VM_EXIT_CONTROLS,
+	vmcs_write32_fixedbits(MSR_IA32_VMX_EXIT_CTLS, VM_EXIT_CONTROLS,
 		     (HOST_IS_64 << 9));  /* 22.2,1, 20.7.1 */
 	vmcs_write32(VM_EXIT_MSR_STORE_COUNT, nr_good_msrs); /* 22.2.2 */
 	vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, nr_good_msrs);  /* 22.2.2 */
@@ -1135,7 +1134,7 @@ static int vmx_vcpu_setup(struct kvm_vcp

 	/* 22.2.1, 20.8.1 */
-	vmcs_write32_fixedbits(MSR_IA32_VMX_ENTRY_CTLS_MSR,
+	vmcs_write32_fixedbits(MSR_IA32_VMX_ENTRY_CTLS,
 			       VM_ENTRY_CONTROLS, 0);
 	vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0);  /* 22.2.1 */

Index: linux-2.6/drivers/kvm/vmx.h
===================================================================
--- linux-2.6.orig/drivers/kvm/vmx.h
+++ linux-2.6/drivers/kvm/vmx.h
@@ -286,11 +286,11 @@ enum vmcs_field {

 #define CR4_VMXE 0x2000

-#define MSR_IA32_VMX_BASIC_MSR			0x480
+#define MSR_IA32_VMX_BASIC			0x480
 #define MSR_IA32_FEATURE_CONTROL		0x03a
-#define MSR_IA32_VMX_PINBASED_CTLS_MSR		0x481
-#define MSR_IA32_VMX_PROCBASED_CTLS_MSR		0x482
-#define MSR_IA32_VMX_EXIT_CTLS_MSR		0x483
-#define MSR_IA32_VMX_ENTRY_CTLS_MSR		0x484
+#define MSR_IA32_VMX_PINBASED_CTLS		0x481
+#define MSR_IA32_VMX_PROCBASED_CTLS		0x482
+#define MSR_IA32_VMX_EXIT_CTLS			0x483
+#define MSR_IA32_VMX_ENTRY_CTLS			0x484

 #endif
* [PATCH 8/8] KVM: Fix oops on oom
  [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  ` (2 preceding siblings ...)
  2006-12-28 10:14 ` [PATCH 7/8] KVM: Rename some msrs Avi Kivity
@ 2006-12-28 10:15 ` Avi Kivity
  2006-12-28 10:33 ` [PATCH 0/8] KVM updates for 2.6.20-rc2 Ingo Molnar
  2006-12-28 12:42 ` [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu() Ingo Molnar
  5 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 10:15 UTC (permalink / raw)
To: kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f
Cc: akpm-3NddpPZAyC0, linux-kernel-u79uwXL29TY76Z2rM5mHXA

__free_page() doesn't like a NULL argument, so check before calling it.
A NULL can only happen if memory is exhausted during allocation of a
memory slot.

Signed-off-by: Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org>

Index: linux-2.6/drivers/kvm/kvm_main.c
===================================================================
--- linux-2.6.orig/drivers/kvm/kvm_main.c
+++ linux-2.6/drivers/kvm/kvm_main.c
@@ -245,7 +245,8 @@ static void kvm_free_physmem_slot(struct
 	if (!dont || free->phys_mem != dont->phys_mem)
 		if (free->phys_mem) {
 			for (i = 0; i < free->npages; ++i)
-				__free_page(free->phys_mem[i]);
+				if (free->phys_mem[i])
+					__free_page(free->phys_mem[i]);
 			vfree(free->phys_mem);
 		}
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2 [not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org> ` (3 preceding siblings ...) 2006-12-28 10:15 ` [PATCH 8/8] KVM: Fix oops on oom Avi Kivity @ 2006-12-28 10:33 ` Ingo Molnar [not found] ` <20061228103345.GA4708-X9Un+BFzKDI@public.gmane.org> 2006-12-28 12:42 ` [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu() Ingo Molnar 5 siblings, 1 reply; 36+ messages in thread From: Ingo Molnar @ 2006-12-28 10:33 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm-devel btw., we've got a problem with KVM in -rt: BUG: scheduling while atomic: qemu/0x00000001/11445, CPU#0 [<c0104fce>] dump_trace+0x63/0x1e8 [<c010516c>] show_trace_log_lvl+0x19/0x2e [<c0105541>] show_trace+0x12/0x14 [<c0105557>] dump_stack+0x14/0x16 [<c0322662>] __schedule+0xae/0xd9f [<c0323597>] schedule+0xec/0x10f [<c0324017>] rt_spin_lock_slowlock+0xd2/0x15b [<c03244a6>] rt_spin_lock+0x1e/0x20 [<c0169d94>] kunmap_high+0x11/0x99 [<c0121139>] kunmap+0x40/0x42 [<c0121171>] kunmap_virt+0x36/0x38 [<f8f017b7>] emulator_read_std+0x92/0xaf [kvm] [<f8f05292>] x86_emulate_memop+0xc6/0x295c [kvm] [<f8f018c3>] emulate_instruction+0xef/0x202 [kvm] [<f8e0f2a7>] handle_exception+0x107/0x1c8 [kvm_intel] [<f8e0ef0c>] kvm_vmx_return+0x145/0x18d [kvm_intel] [<f8f0253c>] kvm_dev_ioctl+0x253/0xd76 [kvm] [<c01874f1>] do_ioctl+0x21/0x66 [<c018778b>] vfs_ioctl+0x255/0x268 [<c01877e6>] sys_ioctl+0x48/0x62 [<c0103f8d>] syscall_call+0x7/0xb [<4a4538b2>] 0x4a4538b2 NOTE: this is not a worry for upstream kernel, it is caused by PREEMPT_RT scheduling in previously atomic APIs like kunmap(). But KVM used to work pretty nicely in -rt and this problem got introduced fairly recently, related to some big-page changes IIRC. Any suggestions of how to best fix this in -rt? Basically, it would be nice to decouple KVM from get_cpu/preempt_disable type of interfaces as much as possible, to make it fully preemptible. 
Below are my current fixups for KVM, but the problem above looks harder
to fix.

	Ingo

---
 drivers/kvm/vmx.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Index: linux/drivers/kvm/vmx.c
===================================================================
--- linux.orig/drivers/kvm/vmx.c
+++ linux/drivers/kvm/vmx.c
@@ -117,7 +117,7 @@ static void vmcs_clear(struct vmcs *vmcs
 static void __vcpu_clear(void *arg)
 {
 	struct kvm_vcpu *vcpu = arg;
-	int cpu = smp_processor_id();
+	int cpu = raw_smp_processor_id();

 	if (vcpu->cpu == cpu)
 		vmcs_clear(vcpu->vmcs);
@@ -576,7 +576,7 @@ static struct vmcs *alloc_vmcs_cpu(int c

 static struct vmcs *alloc_vmcs(void)
 {
-	return alloc_vmcs_cpu(smp_processor_id());
+	return alloc_vmcs_cpu(raw_smp_processor_id());
 }

 static void free_vmcs(struct vmcs *vmcs)
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228103345.GA4708-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 11:04 ` Avi Kivity
  [not found] ` <4593A4B7.2070404-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 11:04 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> btw., we've got a problem with KVM in -rt:
>
> BUG: scheduling while atomic: qemu/0x00000001/11445, CPU#0
>  [<c0104fce>] dump_trace+0x63/0x1e8
>  [<c010516c>] show_trace_log_lvl+0x19/0x2e
>  [<c0105541>] show_trace+0x12/0x14
>  [<c0105557>] dump_stack+0x14/0x16
>  [<c0322662>] __schedule+0xae/0xd9f
>  [<c0323597>] schedule+0xec/0x10f
>  [<c0324017>] rt_spin_lock_slowlock+0xd2/0x15b
>  [<c03244a6>] rt_spin_lock+0x1e/0x20
>  [<c0169d94>] kunmap_high+0x11/0x99
>  [<c0121139>] kunmap+0x40/0x42
>  [<c0121171>] kunmap_virt+0x36/0x38
>  [<f8f017b7>] emulator_read_std+0x92/0xaf [kvm]
>  [<f8f05292>] x86_emulate_memop+0xc6/0x295c [kvm]
>  [<f8f018c3>] emulate_instruction+0xef/0x202 [kvm]
>  [<f8e0f2a7>] handle_exception+0x107/0x1c8 [kvm_intel]
>  [<f8e0ef0c>] kvm_vmx_return+0x145/0x18d [kvm_intel]
>  [<f8f0253c>] kvm_dev_ioctl+0x253/0xd76 [kvm]
>  [<c01874f1>] do_ioctl+0x21/0x66
>  [<c018778b>] vfs_ioctl+0x255/0x268
>  [<c01877e6>] sys_ioctl+0x48/0x62
>  [<c0103f8d>] syscall_call+0x7/0xb
>  [<4a4538b2>] 0x4a4538b2
>
> NOTE: this is not a worry for upstream kernel, it is caused by
> PREEMPT_RT scheduling in previously atomic APIs like kunmap(). But KVM
> used to work pretty nicely in -rt and this problem got introduced fairly
> recently, related to some big-page changes IIRC.

This is not a recent change. The x86 emulator is fetching an
instruction or a memory operand.

> Any suggestions of how to best fix this in -rt? Basically, it would be
> nice to decouple KVM from get_cpu/preempt_disable type of interfaces as
> much as possible, to make it fully preemptible.
kvm on intel uses hidden cpu registers, so it can't be migrated as a
normal process. We'd need a hook on context switch to check if we
migrated to another cpu, issue an IPI to unload the registers on the
previous cpu, and reload the registers on the current cpu. The hook
would need to be protected from context switches. See
vmx.c:vmx_vcpu_load() for the gory details. The actual launch would
require a small non-preemptible section as well, on both intel and AMD.

Talking about the scheduler (and to the scheduler's author :), it would
be nice to hook the migration weight algorithm too. kvm guests are
considerably more expensive to migrate, at least on intel.

> Below are my current fixups for KVM, but the problem above looks
> harder to fix.

These make sense for mainline (especially the first hunk), so I'll
apply them.

> 	Ingo
>
> ---
>  drivers/kvm/vmx.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> Index: linux/drivers/kvm/vmx.c
> ===================================================================
> --- linux.orig/drivers/kvm/vmx.c
> +++ linux/drivers/kvm/vmx.c
> @@ -117,7 +117,7 @@ static void vmcs_clear(struct vmcs *vmcs
>  static void __vcpu_clear(void *arg)
>  {
>  	struct kvm_vcpu *vcpu = arg;
> -	int cpu = smp_processor_id();
> +	int cpu = raw_smp_processor_id();
>
>  	if (vcpu->cpu == cpu)
>  		vmcs_clear(vcpu->vmcs);
> @@ -576,7 +576,7 @@ static struct vmcs *alloc_vmcs_cpu(int c
>
>  static struct vmcs *alloc_vmcs(void)
>  {
> -	return alloc_vmcs_cpu(smp_processor_id());
> +	return alloc_vmcs_cpu(raw_smp_processor_id());
>  }
>
>  static void free_vmcs(struct vmcs *vmcs)

--
error compiling committee.c: too many arguments to function
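The load/IPI dance Avi describes can be sketched as a toy userspace model. All
names below are invented for illustration; the real logic lives in
vmx.c:vmx_vcpu_load(), where the "clear" step is a cross-cpu function call that
executes vmclear on the previous cpu.

```c
#include <assert.h>

/* Toy model of the vcpu-load decision: reload the hidden VMX context
 * when the task shows up on a different cpu than it last ran on. */
struct toy_vcpu {
	int cpu;	/* cpu the VMCS was last loaded on, -1 if none */
	int launched;	/* has vmlaunch run since the last vmclear? */
};

/* Returns 1 if a cross-cpu vmclear (the IPI) was needed. */
static int toy_vcpu_load(struct toy_vcpu *vcpu, int cur_cpu)
{
	int need_clear = vcpu->cpu != -1 && vcpu->cpu != cur_cpu;

	if (need_clear)
		vcpu->launched = 0;	/* vmclear resets the launch state */
	if (vcpu->cpu != cur_cpu)
		vcpu->cpu = cur_cpu;	/* real code: vmptrld here */
	return need_clear;
}
```

Note that on a non-migrating context switch nothing happens at all, which is
the property Avi argues for when he suggests letting only the rare migrations
do the heavy lifting.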
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593A4B7.2070404-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 11:23 ` Ingo Molnar
  [not found] ` <20061228112356.GA14386-X9Un+BFzKDI@public.gmane.org>
  2006-12-28 11:30 ` Ingo Molnar
  1 sibling, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 11:23 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> > NOTE: this is not a worry for upstream kernel, it is caused by
> > PREEMPT_RT scheduling in previously atomic APIs like kunmap(). But
> > KVM used to work pretty nicely in -rt and this problem got introduced
> > fairly recently, related to some big-page changes IIRC.
>
> This is not a recent change. The x86 emulator is fetching an
> instruction or a memory operand.

well, the fact that it's somehow ending up in highmem and thus the kmap
becomes nontrivial could be a recent change maybe?

> > Any suggestions of how to best fix this in -rt? Basically, it would
> > be nice to decouple KVM from get_cpu/preempt_disable type of
> > interfaces as much as possible, to make it fully preemptible.
>
> kvm on intel uses hidden cpu registers, so it can't be migrated as a
> normal process. [...]

oh, granted. My suggestion would be to make it more atomic - not to
break the physical attachment of a KVM context to a physical CPU (which
is unbreakable, obviously).

> [...] We'd need a hook on context switch to check if we migrated to
> another cpu, issue an ipi to unload the registers on the previous cpu,
> and reload the registers on the current cpu. The hook would need to
> be protected from context switches. See vmx.c:vmx_vcpu_load() for the
> gory details. The actual launch would require a small non-preemptible
> section as well, on both intel and AMD.

actually, another, possibly better approach would be to attach a KVM
context to tasks, automatically loaded/unloaded upon switch_to(). No
need to do IPIs. Would this be fast enough?
	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228112356.GA14386-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 12:21 ` Avi Kivity
  2006-12-28 13:15 ` Ingo Molnar
  1 sibling, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 12:21 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > > NOTE: this is not a worry for upstream kernel, it is caused by
> > > PREEMPT_RT scheduling in previously atomic APIs like kunmap(). But
> > > KVM used to work pretty nicely in -rt and this problem got introduced
> > > fairly recently, related to some big-page changes IIRC.
> >
> > This is not a recent change. The x86 emulator is fetching an
> > instruction or a memory operand.
>
> well, the fact that it's somehow ending up in highmem and thus the kmap
> becomes nontrivial could be a recent change maybe?

Guest memory was always highmem. Whether a particular page is in
highmem is of course runtime dependent.

> > > Any suggestions of how to best fix this in -rt? Basically, it would
> > > be nice to decouple KVM from get_cpu/preempt_disable type of
> > > interfaces as much as possible, to make it fully preemptible.
> >
> > kvm on intel uses hidden cpu registers, so it can't be migrated as a
> > normal process. [...]
>
> oh, granted. My suggestion would be to make it more atomic - not to
> break the physical attachment of a KVM context to a physical CPU (which
> is unbreakable, obviously).
>
> > [...] We'd need a hook on context switch to check if we migrated to
> > another cpu, issue an ipi to unload the registers on the previous cpu,
> > and reload the registers on the current cpu. The hook would need to
> > be protected from context switches. See vmx.c:vmx_vcpu_load() for the
> > gory details. The actual launch would require a small non-preemptible
> > section as well, on both intel and AMD.
> actually, another, possibly better approach would be to attach a KVM
> context to tasks, automatically loaded/unloaded upon switch_to(). No
> need to do IPIs. Would this be fast enough?

You don't want or need to unload on switching out, only on migration.
Since migrations are hopefully few and far between, it's best to let
them handle the heavy lifting and keep non-migrating context switches
cheap.

If we touch the scheduler, maybe we can let the migration threads call
a kvm callback to unload the vmx context.

--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228112356.GA14386-X9Un+BFzKDI@public.gmane.org>
  2006-12-28 12:21 ` Avi Kivity
@ 2006-12-28 13:15 ` Ingo Molnar
  1 sibling, 0 replies; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 13:15 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Ingo Molnar <mingo-X9Un+BFzKDI@public.gmane.org> wrote:

> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > > NOTE: this is not a worry for upstream kernel, it is caused by
> > > PREEMPT_RT scheduling in previously atomic APIs like kunmap(). But
> > > KVM used to work pretty nicely in -rt and this problem got introduced
> > > fairly recently, related to some big-page changes IIRC.
> >
> > This is not a recent change. The x86 emulator is fetching an
> > instruction or a memory operand.
>
> well, the fact that it's somehow ending up in highmem and thus the
> kmap becomes nontrivial could be a recent change maybe?

I have worked around the problem via the change below. This affects
32-bit hosts only, and only on CONFIG_PREEMPT_RT. With this and the
other two patches, kvm/vmx works on 32-bit -rt hosts.

	Ingo

Index: linux/drivers/kvm/kvm_main.c
===================================================================
--- linux.orig/drivers/kvm/kvm_main.c
+++ linux/drivers/kvm/kvm_main.c
@@ -628,8 +628,12 @@ raced:
 	memset(new.phys_mem, 0, npages * sizeof(struct page *));
 	for (i = 0; i < npages; ++i) {
+#ifdef CONFIG_PREEMPT_RT
+		new.phys_mem[i] = alloc_page(GFP_USER | __GFP_ZERO);
+#else
 		new.phys_mem[i] = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
+#endif
 		if (!new.phys_mem[i])
 			goto out_free;
 	}
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593A4B7.2070404-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  2006-12-28 11:23 ` Ingo Molnar
@ 2006-12-28 11:30 ` Ingo Molnar
  [not found] ` <20061228113038.GA16190-X9Un+BFzKDI@public.gmane.org>
  1 sibling, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 11:30 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> Talking about the scheduler (and to the scheduler's author :), it
> would be nice to hook the migration weight algorithm too. kvm guests
> are considerably more expensive to migrate, at least on intel.

what do you mean precisely, and where does the cost come from? To me it
seems the cost of loading/saving a VMX context on Intel is the biggest
cost, and that's roughly the same cost for VM exits/enters, regardless
of what the scheduler does. What am I missing?

	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228113038.GA16190-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 12:32 ` Avi Kivity
  [not found] ` <4593B948.5090009-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 12:32 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > Talking about the scheduler (and to the scheduler's author :), it
> > would be nice to hook the migration weight algorithm too. kvm guests
> > are considerably more expensive to migrate, at least on intel.
>
> what do you mean precisely, and where does the cost come from? To me it
> seems the cost of loading/saving a VMX context on Intel is the biggest
> cost, and that's roughly the same cost for VM exits/enters, regardless
> of what the scheduler does. What am i missing?

There are three types of vmx context switches.

1. The most common switch is a vm entry/exit pair (vmresume
instruction). The context is already loaded into the cpu (via a
vmptrld, below), and we enter into a preloaded VM. In addition,
register access implicitly addresses the loaded context.

2. The second most common switch is loading another context. If we
have several VMs on a uniprocessor machine, the execution sequence
looks like this:

	vmptrld vm1
	... access context registers (vmread/vmwrite)
	vmresume
	... access context registers (vmread/vmwrite)
	... access context registers (vmread/vmwrite)
	vmresume
	... access context registers (vmread/vmwrite)
	... access context registers (vmread/vmwrite)
	vmresume
	... access context registers (vmread/vmwrite)
	Linux context switch to another kvm process
	vmptrld vm2
	... access context registers (vmread/vmwrite)
	vmresume
	... access context registers (vmread/vmwrite)
	... access context registers (vmread/vmwrite)
	vmresume
	... access context registers (vmread/vmwrite)
	Linux context switch back to the original process
	vmptrld vm1

Linux context switches to non-kvm processes and back need no special
handling. The processor caches the current VM context, and possibly
more than one.

3. The most expensive vmx context switch involves cpu migration:

	cpu 0: vmclear vm1	/* decache vmx context into memory */
	cpu 1: vmptrld vm1
	       ... vm register access
	       vmlaunch

The vmlaunch instruction, like vmresume, causes a VM entry, but is
documented to be significantly more expensive. It is required after a
vmclear.

Currently, the vmclear is performed by an IPI, because we can only
detect migration after the fact. However, if we enlist the migration
threads, we can vmclear before the process has left the cpu.

--
error compiling committee.c: too many arguments to function
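The three cases above can be modeled as a small decision function. This is a
sketch only, with invented names; it tracks which VMCS each cpu has most
recently loaded and reports which instruction sequence a VM entry would need:

```c
#include <assert.h>

enum entry_kind {
	ENTRY_VMRESUME,		/* case 1: context already loaded here */
	ENTRY_PTRLD_RESUME,	/* case 2: vmptrld, then vmresume */
	ENTRY_CLEAR_LAUNCH	/* case 3 (and first use): vmclear + vmptrld + vmlaunch */
};

#define NCPUS 4
static int loaded_vm[NCPUS];	/* vm id vmptrld'ed on each cpu, 0 = none */

/* last_cpu is where this vm last ran, -1 for its very first entry. */
static enum entry_kind enter_vm(int vm, int last_cpu, int cur_cpu)
{
	if (last_cpu == cur_cpu && loaded_vm[cur_cpu] == vm)
		return ENTRY_VMRESUME;
	if (last_cpu == cur_cpu) {
		loaded_vm[cur_cpu] = vm;
		return ENTRY_PTRLD_RESUME;
	}
	/* migration (or first use): the old cpu must vmclear, and the
	 * next entry on the new cpu must use vmlaunch */
	if (last_cpu >= 0)
		loaded_vm[last_cpu] = 0;
	loaded_vm[cur_cpu] = vm;
	return ENTRY_CLEAR_LAUNCH;
}
```

The ordering of the three cases mirrors the cost hierarchy in Avi's message:
vmresume alone is cheapest, a same-cpu vmptrld is in the middle, and a
migration pays for vmclear plus the slower vmlaunch entry.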
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593B948.5090009-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 13:37 ` Ingo Molnar
  [not found] ` <20061228133746.GC3392-X9Un+BFzKDI@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 13:37 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> 3. The most expensive vmx context switch involves cpu migration:
>
> 	cpu 0: vmclear vm1	/* decache vmx context into memory */
> 	cpu 1: vmptrld vm1
> 	       ... vm register access
> 	       vmlaunch

ah. And you optimize this away if previously-used-CPU == curr-CPU and
the vmx context loaded on this CPU is the same as we are trying to run
now, right?

> The vmlaunch instruction, like vmresume, causes a VM entry, but is
> documented to be significantly more expensive. It is required after a
> vmclear.
>
> Currently, the vmclear is performed by an IPI, because we can only
> detect migration after the fact. However, if we enlist the migration
> threads, we can vmclear before the process has left the cpu.

the most common type of migration isn't even in the migration threads
but in simple try_to_wake_up(). And that call often does not run on the
CPU that has the VMX state ...

i see no easy solution here - this really parallels all the lazy-FPU
problems. Would it really be all that expensive to just save/load the
VMX state in switch_to()? As SVM has shown, we can rely on VMX state
save/load to become faster in the future. So we definitely shouldn't
design for a small-scale overhead in first-generation silicon.

	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228133746.GC3392-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 13:49 ` Avi Kivity
  [not found] ` <4593CB61.5050709-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 13:49 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > 3. The most expensive vmx context switch involves cpu migration:
> >
> > 	cpu 0: vmclear vm1	/* decache vmx context into memory */
> > 	cpu 1: vmptrld vm1
> > 	       ... vm register access
> > 	       vmlaunch
>
> ah. And you optimize this away if previously-used-CPU == curr-CPU and
> the vmx context loaded on this CPU is the same as we are trying to run
> now, right?

Yes. If we changed cpu, we do an IPI vmclear and a vmptrld. If we just
changed vm, we only do a vmptrld. If nothing changed, we do nothing.

> the most common type of migration isn't even in the migration threads
> but in simple try_to_wake_up(). And that call often does not run on
> the CPU that has the VMX state ...
>
> i see no easy solution here - this really parallels all the lazy-FPU
> problems. Would it really be all that expensive to just save/load the
> VMX state in switch_to()?

The Intel documentation explicitly states that a vmlaunch (required
after flushing the vmx state using a vmclear) is much more expensive
than a vmresume. I'd expect the difference to become more important if
they cache multiple VMs on one core, which seems to be the point of all
this complexity.

> As SVM has shown, we can rely on VMX state save/load to become faster
> in the future. So we definitely shouldn't design for a small-scale
> overhead in first-generation silicon.

In this case I think the documentation indicates their long term
plans. However, the only real answer is to measure.

--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593CB61.5050709-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 13:50 ` Ingo Molnar
  [not found] ` <20061228135020.GA7606-X9Un+BFzKDI@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 13:50 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> > As SVM has shown, we can rely on VMX state save/load to become
> > faster in the future. So we definitely shouldn't design for a
> > small-scale overhead in first-generation silicon.
>
> In this case I think the documentation indicates their long term
> plans. However, the only real answer is to measure.

yeah. Would be nice to see some hard numbers about how many cycles all
these context load/save variants take.

	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228135020.GA7606-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 13:58 ` Avi Kivity
  [not found] ` <4593CD74.6060202-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 13:58 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > > As SVM has shown, we can rely on VMX state save/load to become
> > > faster in the future. So we definitely shouldn't design for a
> > > small-scale overhead in first-generation silicon.
> >
> > In this case I think the documentation indicates their long term
> > plans. However, the only real answer is to measure.
>
> yeah. Would be nice to see some hard numbers about how many cycles all
> these context load/save variants take.

PIO latency on AMD (including a trip to qemu and back) is 5500 cycles
[1]. Intel is significantly higher.

[1] http://virt.kernelnewbies.org/KVM/Performance

--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593CD74.6060202-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 14:07 ` Ingo Molnar
  [not found] ` <20061228140742.GA10033-X9Un+BFzKDI@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 14:07 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:

> > yeah. Would be nice to see some hard numbers about how many cycles
> > all these context load/save variants take.
>
> PIO latency on AMD (including a trip to qemu and back) is 5500 cycles
> [1]. Intel is significantly higher.
>
> [1] http://virt.kernelnewbies.org/KVM/Performance

ok. We need the IPI only on VMX, right? That's because it has cached VM
state in the CPU that needs to be flushed out to the public VM area
before it can be loaded into another CPU, correct? Is there no cheap
way to do this flushing preemptively (say in vcpu_put()), to make the
VM context potentially loadable into another CPU?

	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228140742.GA10033-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 14:18 ` Avi Kivity
  [not found] ` <4593D243.1030301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 14:18 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > > yeah. Would be nice to see some hard numbers about how many cycles
> > > all these context load/save variants take.
> >
> > PIO latency on AMD (including a trip to qemu and back) is 5500 cycles
> > [1]. Intel is significantly higher.
> >
> > [1] http://virt.kernelnewbies.org/KVM/Performance
>
> ok. We need the IPI only on VMX, right?

Correct.

> That's because it has cached VM state in the CPU that needs to be
> flushed out to the public VM area before it can be loaded into another
> CPU, correct?

Correct.

> Is there no cheap way to do this flushing preemptively (say in
> vcpu_put()), to make the VM context potentially loadable into another
> CPU?

You could issue a vmclear, but that forces a vm entry on the _same_ cpu
to use vmlaunch instead of vmresume, which is documented as expensive.
If we assume that cpu migrations are much rarer than vm entries (a very
safe assumption), then the IPI is better than forcing a vmclear on
vcpu_put().

Here's the relevant text from scripture:

  The following software usage is consistent with these limitations:
  • VMCLEAR should be executed for a VMCS before it is used for VM
    entry.
  • VMLAUNCH should be used for the first VM entry using a VMCS after
    VMCLEAR has been executed for that VMCS.
  • VMRESUME should be used for any subsequent VM entry using a VMCS
    (until the next execution of VMCLEAR for the VMCS).

  It is expected that, in general, VMRESUME will have lower latency
  than VMLAUNCH. Since “migrating” a VMCS from one logical processor to
  another requires use of VMCLEAR (see Section 20.10.1), which sets the
  launch state of the VMCS to “clear,” such migration requires the next
  VM entry to be performed using VMLAUNCH. Software developers can
  avoid the performance cost of increased VM-entry latency by avoiding
  unnecessary migration of a VMCS from one logical processor to
  another.

--
error compiling committee.c: too many arguments to function
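The launch-state rules in that excerpt amount to a tiny state machine. A
hedged sketch follows (names invented for illustration); the error value 5
matches the "VMRESUME with non-launched VMCS" error number Avi cites later in
the thread:

```c
#include <assert.h>

/* VMCS launch state per the quoted rules: VMCLEAR -> "clear",
 * VMLAUNCH is valid only from "clear" (and moves to "launched"),
 * VMRESUME is valid only from "launched". */
enum vmcs_state { VMCS_CLEAR, VMCS_LAUNCHED };

struct toy_vmcs { enum vmcs_state state; };

static void toy_vmclear(struct toy_vmcs *v)
{
	v->state = VMCS_CLEAR;
}

static int toy_vmlaunch(struct toy_vmcs *v)
{
	if (v->state != VMCS_CLEAR)
		return -1;		/* must vmclear first */
	v->state = VMCS_LAUNCHED;
	return 0;
}

static int toy_vmresume(struct toy_vmcs *v)
{
	if (v->state != VMCS_LAUNCHED)
		return 5;		/* "VMRESUME with non-launched VMCS" */
	return 0;
}
```

The model makes the migration cost visible: vmclear drops the VMCS back to
the "clear" state, so the next entry anywhere must pay for the slower
vmlaunch path.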
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <4593D243.1030301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
@ 2006-12-28 15:01 ` Ingo Molnar
  [not found] ` <20061228150104.GB16057-X9Un+BFzKDI@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 15:01 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi@qumranet.com> wrote:

> > Is there no cheap way to do this flushing preemptively (say in
> > vcpu_put()), to make the VM context potentially loadable into
> > another CPU?
>
> You could issue a vmclear, but that forces a vm entry on the _same_
> cpu to use vmlaunch instead of vmresume, which is documented as
> expensive.

hm. Could you try to just do a vmresume nevertheless - maybe it works
but is undocumented? What would be nice is a 'vmflush' instruction that
still keeps the state usable.

> It is expected that, in general, VMRESUME will have lower latency
> than VMLAUNCH. Since “migrating” a VMCS from one logical processor to
> another requires use of VMCLEAR (see Section 20.10.1), which sets the
> launch state of the VMCS to “clear,” such migration requires the next
> VM entry to be performed using VMLAUNCH. Software developers can
> avoid the performance cost of increased VM-entry latency by avoiding
> unnecessary migration of a VMCS from one logical processor to
> another.

doh, very helpful ...

	Ingo
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  [not found] ` <20061228150104.GB16057-X9Un+BFzKDI@public.gmane.org>
@ 2006-12-28 15:09 ` Avi Kivity
  [not found] ` <4593DE1D.8010701-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 15:09 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi-atKUWr5tajBWk0Htik3J/w@public.gmane.org> wrote:
>
> > > Is there no cheap way to do this flushing preemptively (say in
> > > vcpu_put()), to make the VM context potentially loadable into
> > > another CPU?
> >
> > You could issue a vmclear, but that forces a vm entry on the _same_
> > cpu to use vmlaunch instead of vmresume, which is documented as
> > expensive.
>
> hm. Could you try to just do a vmresume nevertheless - maybe it works
> but is undocumented?

I'd hate to do that. There are at least three implementations of VT
(PD, Core, Core2) and maybe more when steppings and microcode updates
are considered. Not to mention that having kvm fail when a new cpu is
introduced is not a pleasant scenario.

--
error compiling committee.c: too many arguments to function
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  2006-12-28 15:11 ` Ingo Molnar
0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 15:11 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel

* Avi Kivity <avi@qumranet.com> wrote:

> > hm. Could you try to just do a vmresume nevertheless - maybe it
> > works but is undocumented?
>
> I'd hate to do that. There are at least three implementations of VT
> (PD, Core, Core2) and maybe more when steppings and microcode updates
> are considered. Not to mention that having kvm fail when a new cpu is
> introduced is not a pleasant scenario.

let's check whether it's even possible ... Then we could ask Intel to
clarify their position on this. I can see no real reason why a vmclear
should really make it impossible to re-use the already loaded context
data /on the same CPU/. And that is the important case. When we migrate
to another CPU we have to do a VMLAUNCH anyway, upon the first
execution of that context.

	Ingo

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  2006-12-28 15:25 ` Avi Kivity
0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 15:25 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Ingo Molnar wrote:
> * Avi Kivity <avi@qumranet.com> wrote:
>
>>> hm. Could you try to just do a vmresume nevertheless - maybe it
>>> works but is undocumented?
>>>
>> I'd hate to do that. There are at least three implementations of VT
>> (PD, Core, Core2) and maybe more when steppings and microcode updates
>> are considered. Not to mention that having kvm fail when a new cpu is
>> introduced is not a pleasant scenario.
>>
>
> let's check whether it's even possible ... Then we could ask Intel to
> clarify their position on this. I can see no real reason why a vmclear
> should really make it impossible to re-use the already loaded context
> data /on the same CPU/. And that is the important case. When we
> migrate to another CPU we have to do a VMLAUNCH anyway, upon the first
> execution of that context.

There's a documented error number, 5, for "VMRESUME with non-launched
vmcs".

Furthermore, I checked, and it indeed doesn't work.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [PATCH 0/8] KVM updates for 2.6.20-rc2
  2006-12-28 15:28 ` Avi Kivity
0 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 15:28 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel

Avi Kivity wrote:
> There's a documented error number, 5, for "VMRESUME with non-launched
> vmcs".
>
> Furthermore, I checked, and it indeed doesn't work.

btw, there is a large number of checks in the vmx spec, and I know of
none which are not implemented. It seems Intel is (understandably) wary
of software exploiting undocumented behavior, which would force them to
support it in future processors to maintain compatibility.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply [flat|nested] 36+ messages in thread
* [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 12:42 ` Ingo Molnar
  2006-12-28 12:56 ` Avi Kivity
5 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 12:42 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel, Andrew Morton, Linus Torvalds, linux-kernel

Subject: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
From: Ingo Molnar <mingo@elte.hu>

fix a GFP_KERNEL allocation in atomic section bug:
kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
alloc_pages(), while holding the vcpu. The fix is to set up the MMU
state earlier; it does not require a loaded CPU state.

(NOTE: free_vcpus does a kvm_mmu_destroy() call so there's no need for
any extra teardown branch on allocation failure here.)

found in 2.6.20-rc2-rt1.

Signed-off-by: Ingo Molnar <mingo@elte.hu>

---
 drivers/kvm/kvm_main.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Index: linux/drivers/kvm/kvm_main.c
===================================================================
--- linux.orig/drivers/kvm/kvm_main.c
+++ linux/drivers/kvm/kvm_main.c
@@ -522,12 +522,12 @@ static int kvm_dev_ioctl_create_vcpu(str
 	if (r < 0)
 		goto out_free_vcpus;
 
-	kvm_arch_ops->vcpu_load(vcpu);
+	r = kvm_mmu_init(vcpu);
+	if (r < 0)
+		goto out_free_vcpus;
 
+	kvm_arch_ops->vcpu_load(vcpu);
 	r = kvm_arch_ops->vcpu_setup(vcpu);
-	if (r >= 0)
-		r = kvm_mmu_init(vcpu);
-
 	vcpu_put(vcpu);
 
 	if (r < 0)
^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 12:56 ` Avi Kivity
0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 12:56 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel, linux-kernel, Andrew Morton, Linus Torvalds

Ingo Molnar wrote:
> Subject: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
> From: Ingo Molnar <mingo@elte.hu>
>
> fix a GFP_KERNEL allocation in atomic section bug:
> kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
> alloc_pages(), while holding the vcpu. The fix is to set up the MMU
> state earlier; it does not require a loaded CPU state.
>

Yes it does. It calls nonpaging_init_context() which calls
vmx_set_cr3(), which promptly trashes the address space of the VM that
previously ran on that vcpu (or, if there were none, logs a vmwrite
error).

--
error compiling committee.c: too many arguments to function

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 12:55 ` Ingo Molnar
0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 12:55 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Andrew Morton, Linus Torvalds

* Avi Kivity <avi@qumranet.com> wrote:

> > fix a GFP_KERNEL allocation in atomic section bug:
> > kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
> > alloc_pages(), while holding the vcpu. The fix is to set up the MMU
> > state earlier; it does not require a loaded CPU state.
>
> Yes it does. It calls nonpaging_init_context() which calls
> vmx_set_cr3(), which promptly trashes the address space of the VM that
> previously ran on that vcpu (or, if there were none, logs a vmwrite
> error).

ok, i missed that. Nevertheless the problem of the nonatomic alloc
remains. I guess kvm_mmu_init() needs to be split into kvm_mmu_create()
and kvm_mmu_setup()?

	Ingo

^ permalink raw reply [flat|nested] 36+ messages in thread
* [patch, try#2] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 13:08 ` Ingo Molnar
  2006-12-28 13:14 ` Avi Kivity
0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 13:08 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel, Andrew Morton, Linus Torvalds, linux-kernel

* Ingo Molnar <mingo@elte.hu> wrote:

> > Yes it does. It calls nonpaging_init_context() which calls
> > vmx_set_cr3(), which promptly trashes the address space of the VM
> > that previously ran on that vcpu (or, if there were none, logs a
> > vmwrite error).
>
> ok, i missed that. Nevertheless the problem of the nonatomic alloc
> remains. I guess kvm_mmu_init() needs to be split into
> kvm_mmu_create() and kvm_mmu_setup()?

the patch below implements this fix. Lightly tested on 32-bit/VMX on
2.6.20-rc2-rt2 but seems to have done the trick.

	Ingo

-------------------->
Subject: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
From: Ingo Molnar <mingo@elte.hu>

fix a GFP_KERNEL allocation in atomic section:
kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
alloc_pages(), while holding the vcpu.

The fix is to set up the MMU state in two phases: kvm_mmu_create() and
kvm_mmu_setup().

(NOTE: free_vcpus does a kvm_mmu_destroy() call so there's no need
for any extra teardown branch on allocation/init failure here.)

Signed-off-by: Ingo Molnar <mingo@elte.hu>

---
 drivers/kvm/kvm.h      |    3 ++-
 drivers/kvm/kvm_main.c |   10 ++++++----
 drivers/kvm/mmu.c      |   24 +++++++++---------------
 3 files changed, 17 insertions(+), 20 deletions(-)

Index: linux/drivers/kvm/kvm.h
===================================================================
--- linux.orig/drivers/kvm/kvm.h
+++ linux/drivers/kvm/kvm.h
@@ -319,7 +319,8 @@ int kvm_init_arch(struct kvm_arch_ops *o
 void kvm_exit_arch(void);
 
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu);
-int kvm_mmu_init(struct kvm_vcpu *vcpu);
+int kvm_mmu_create(struct kvm_vcpu *vcpu);
+int kvm_mmu_setup(struct kvm_vcpu *vcpu);
 
 int kvm_mmu_reset_context(struct kvm_vcpu *vcpu);
 void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot);

Index: linux/drivers/kvm/kvm_main.c
===================================================================
--- linux.orig/drivers/kvm/kvm_main.c
+++ linux/drivers/kvm/kvm_main.c
@@ -522,12 +522,14 @@ static int kvm_dev_ioctl_create_vcpu(str
 	if (r < 0)
 		goto out_free_vcpus;
 
-	kvm_arch_ops->vcpu_load(vcpu);
+	r = kvm_mmu_create(vcpu);
+	if (r < 0)
+		goto out_free_vcpus;
 
-	r = kvm_arch_ops->vcpu_setup(vcpu);
+	kvm_arch_ops->vcpu_load(vcpu);
+	r = kvm_mmu_setup(vcpu);
 	if (r >= 0)
-		r = kvm_mmu_init(vcpu);
-
+		r = kvm_arch_ops->vcpu_setup(vcpu);
 	vcpu_put(vcpu);
 
 	if (r < 0)

Index: linux/drivers/kvm/mmu.c
===================================================================
--- linux.orig/drivers/kvm/mmu.c
+++ linux/drivers/kvm/mmu.c
@@ -639,28 +639,22 @@ error_1:
 	return -ENOMEM;
 }
 
-int kvm_mmu_init(struct kvm_vcpu *vcpu)
+int kvm_mmu_create(struct kvm_vcpu *vcpu)
 {
-	int r;
-
 	ASSERT(vcpu);
 	ASSERT(!VALID_PAGE(vcpu->mmu.root_hpa));
 	ASSERT(list_empty(&vcpu->free_pages));
 
-	r = alloc_mmu_pages(vcpu);
-	if (r)
-		goto out;
-
-	r = init_kvm_mmu(vcpu);
-	if (r)
-		goto out_free_pages;
+	return alloc_mmu_pages(vcpu);
+}
 
-	return 0;
+int kvm_mmu_setup(struct kvm_vcpu *vcpu)
+{
+	ASSERT(vcpu);
+	ASSERT(!VALID_PAGE(vcpu->mmu.root_hpa));
+	ASSERT(!list_empty(&vcpu->free_pages));
 
-out_free_pages:
-	free_mmu_pages(vcpu);
-out:
-	return r;
+	return init_kvm_mmu(vcpu);
 }
 
 void kvm_mmu_destroy(struct kvm_vcpu *vcpu)

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [patch, try#2] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 13:14 ` Avi Kivity
0 siblings, 1 reply; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 13:14 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel, linux-kernel, Andrew Morton, Linus Torvalds

Ingo Molnar wrote:
> Subject: [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
> From: Ingo Molnar <mingo@elte.hu>
>
> fix a GFP_KERNEL allocation in atomic section:
> kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
> alloc_pages(), while holding the vcpu.
>
> The fix is to set up the MMU state in two phases: kvm_mmu_create() and
> kvm_mmu_setup().
>
> (NOTE: free_vcpus does a kvm_mmu_destroy() call so there's no need
> for any extra teardown branch on allocation/init failure here.)
>
> Signed-off-by: Ingo Molnar <mingo@elte.hu>
>

Applied, thanks.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [patch, try#2] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 13:23 ` Ingo Molnar
0 siblings, 1 reply; 36+ messages in thread
From: Ingo Molnar @ 2006-12-28 13:23 UTC (permalink / raw)
To: Avi Kivity; +Cc: kvm-devel, linux-kernel, Andrew Morton, Linus Torvalds

* Avi Kivity <avi@qumranet.com> wrote:

> Ingo Molnar wrote:
> > Subject: [patch] kvm: fix GFP_KERNEL allocation in atomic section in
> > kvm_dev_ioctl_create_vcpu()
> > From: Ingo Molnar <mingo@elte.hu>
> >
> > fix a GFP_KERNEL allocation in atomic section:
> > kvm_dev_ioctl_create_vcpu() called kvm_mmu_init(), which calls
> > alloc_pages(), while holding the vcpu.
> >
> > The fix is to set up the MMU state in two phases: kvm_mmu_create()
> > and kvm_mmu_setup().
> >
> > (NOTE: free_vcpus does a kvm_mmu_destroy() call so there's no need
> > for any extra teardown branch on allocation/init failure here.)
> >
> > Signed-off-by: Ingo Molnar <mingo@elte.hu>
>
> Applied, thanks.

great!

I've got a security related question as well: vcpu_load() sets up a
physical CPU's VM registers/state, and vcpu_put() drops that. But
vcpu_put() only does a put_cpu() call - it does not tear down any VM
state that has been loaded into the CPU. Is it guaranteed that (hostile)
user-space cannot use that VM state in any unauthorized way? The state
is still loaded while arbitrary tasks execute on the CPU. The next
vcpu_load() will then override it, but the state lingers around forever.

The new x86 VM instructions - vmclear, vmlaunch, vmresume, vmptrld,
vmread, vmwrite, vmxoff, vmxon - are all privileged, so i guess it
should be mostly safe; i'm just wondering whether you thought about
this attack angle.

ultimately we want to integrate VM state management into the scheduler
and the context-switch lowlevel arch code, but right now CPU state
management is done by the KVM 'driver' and there's nothing that isolates
other tasks from possible side-effects of a loaded VMX/SVM state.

	Ingo

^ permalink raw reply [flat|nested] 36+ messages in thread
* Re: [patch, try#2] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu()
  2006-12-28 13:30 ` Avi Kivity
0 siblings, 0 replies; 36+ messages in thread
From: Avi Kivity @ 2006-12-28 13:30 UTC (permalink / raw)
To: Ingo Molnar; +Cc: kvm-devel, linux-kernel, Andrew Morton, Linus Torvalds

Ingo Molnar wrote:
> I've got a security related question as well: vcpu_load() sets up a
> physical CPU's VM registers/state, and vcpu_put() drops that. But
> vcpu_put() only does a put_cpu() call - it does not tear down any VM
> state that has been loaded into the CPU. Is it guaranteed that
> (hostile) user-space cannot use that VM state in any unauthorized way?
> The state is still loaded while arbitrary tasks execute on the CPU.
> The next vcpu_load() will then override it, but the state lingers
> around forever.
>
> The new x86 VM instructions - vmclear, vmlaunch, vmresume, vmptrld,
> vmread, vmwrite, vmxoff, vmxon - are all privileged, so i guess it
> should be mostly safe; i'm just wondering whether you thought about
> this attack angle.
>

Yes. Userspace cannot snoop on a VM state.

> ultimately we want to integrate VM state management into the scheduler
> and the context-switch lowlevel arch code, but right now CPU state
> management is done by the KVM 'driver' and there's nothing that
> isolates other tasks from possible side-effects of a loaded VMX/SVM
> state.
>

AFAICS in vmx root mode the vm state only affects vmx instructions; SVM
has no architecturally hidden state.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply [flat|nested] 36+ messages in thread
end of thread, other threads:[~2007-01-01 8:20 UTC | newest]
Thread overview: 36+ messages
2006-12-28 10:07 [PATCH 0/8] KVM updates for 2.6.20-rc2 Avi Kivity
2006-12-28 10:08 ` [PATCH 1/8] KVM: Use boot_cpu_data instead of current_cpu_data Avi Kivity
2006-12-28 10:09 ` [PATCH 2/8] KVM: Simplify is_long_mode() Avi Kivity
2006-12-28 10:12 ` [PATCH 5/8] KVM: Move common msr handling to arch independent code Avi Kivity
2006-12-28 10:13 ` [PATCH 6/8] KVM: More msr misery Avi Kivity
[not found] ` <45939755.7010603-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 10:10 ` [PATCH 3/8] KVM: Initialize kvm_arch_ops on unload Avi Kivity
2006-12-28 10:11 ` [PATCH 4/8] KVM: Implement a few system configuration msrs Avi Kivity
[not found] ` <20061228101117.65A392500F7-LjA0eNSCdXrQnzwC+xcbyw@public.gmane.org>
2007-01-01 0:07 ` Ingo Oeser
[not found] ` <200701010107.18008.ioe-lkml-MFZNYGWX9eyELgA04lAiVw@public.gmane.org>
2007-01-01 8:20 ` Avi Kivity
2006-12-28 10:14 ` [PATCH 7/8] KVM: Rename some msrs Avi Kivity
2006-12-28 10:15 ` [PATCH 8/8] KVM: Fix oops on oom Avi Kivity
2006-12-28 10:33 ` [PATCH 0/8] KVM updates for 2.6.20-rc2 Ingo Molnar
[not found] ` <20061228103345.GA4708-X9Un+BFzKDI@public.gmane.org>
2006-12-28 11:04 ` Avi Kivity
[not found] ` <4593A4B7.2070404-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 11:23 ` Ingo Molnar
[not found] ` <20061228112356.GA14386-X9Un+BFzKDI@public.gmane.org>
2006-12-28 12:21 ` Avi Kivity
2006-12-28 13:15 ` Ingo Molnar
2006-12-28 11:30 ` Ingo Molnar
[not found] ` <20061228113038.GA16190-X9Un+BFzKDI@public.gmane.org>
2006-12-28 12:32 ` Avi Kivity
[not found] ` <4593B948.5090009-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 13:37 ` Ingo Molnar
[not found] ` <20061228133746.GC3392-X9Un+BFzKDI@public.gmane.org>
2006-12-28 13:49 ` Avi Kivity
[not found] ` <4593CB61.5050709-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 13:50 ` Ingo Molnar
[not found] ` <20061228135020.GA7606-X9Un+BFzKDI@public.gmane.org>
2006-12-28 13:58 ` Avi Kivity
[not found] ` <4593CD74.6060202-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 14:07 ` Ingo Molnar
[not found] ` <20061228140742.GA10033-X9Un+BFzKDI@public.gmane.org>
2006-12-28 14:18 ` Avi Kivity
[not found] ` <4593D243.1030301-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 15:01 ` Ingo Molnar
[not found] ` <20061228150104.GB16057-X9Un+BFzKDI@public.gmane.org>
2006-12-28 15:09 ` Avi Kivity
[not found] ` <4593DE1D.8010701-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 15:11 ` Ingo Molnar
[not found] ` <20061228151159.GA20279-X9Un+BFzKDI@public.gmane.org>
2006-12-28 15:25 ` Avi Kivity
[not found] ` <4593E1E3.2020800-atKUWr5tajBWk0Htik3J/w@public.gmane.org>
2006-12-28 15:28 ` Avi Kivity
2006-12-28 12:42 ` [patch] kvm: fix GFP_KERNEL allocation in atomic section in kvm_dev_ioctl_create_vcpu() Ingo Molnar
2006-12-28 12:56 ` Avi Kivity
2006-12-28 12:55 ` Ingo Molnar
[not found] ` <20061228125544.GA31207-X9Un+BFzKDI@public.gmane.org>
2006-12-28 13:08 ` [patch, try#2] " Ingo Molnar
2006-12-28 13:14 ` Avi Kivity
2006-12-28 13:23 ` Ingo Molnar
2006-12-28 13:30 ` Avi Kivity