[PATCH 0/9] Add support for nested SVM (kernel) v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

The current generation of virtualization extensions only supports one VM
layer. While we can't change that, it is pretty easy to emulate the CPU's
behavior and implement the virtualization opcodes ourselves.

This patchset does exactly this for SVM. Using it, KVM can run within a VM.
Since we're emulating the real CPU's behavior, this should also enable other
VMMs to run within KVM. So far I've only tested running KVM inside the VM,
though.

As always, comments and suggestions are highly welcome.

v2 takes most comments from Avi into account.

v3 addresses Joerg's comments, including
  - V_INTR_MASKING support
  - a generic permission checking helper

v4 addresses even more comments from Joerg, including
  - don't use the guest's hsave area to store the guest's vmcb in
  - add a nested=<int> flag for kvm-amd.ko, defaulting to 0 (off)
  - include Joerg's VM_CR MSR patch

v5 removes the IOPM merging code.

To be usable, this patchset requires the two simple changes to the userspace
part that I sent to the list with the first version.

Known issues:
  - TODO: #VMEXIT on save/restore
  - SMP l2 guests break with the in-kernel apic

Thanks for reviewing!

Alex

Alexander Graf (8):
  Clean up VINTR setting v5
  Add helper functions for nested SVM v5
  Implement GIF, clgi and stgi v5
  Implement hsave v5
  Add VMLOAD and VMSAVE handlers v5
  Add VMRUN handler v5
  Add VMEXIT handler and intercepts v5
  Allow setting the SVME bit v5

Joerg Roedel (1):
  allow read access to MSR_VM_VR

 arch/x86/kvm/kvm_svm.h     |    9 +
 arch/x86/kvm/svm.c         |  682 +++++++++++++++++++++++++++++++++++++++++++-
 include/asm-x86/kvm_host.h |    5 +
 3 files changed, 684 insertions(+), 12 deletions(-)
[PATCH 1/9] Clean up VINTR setting v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

The current VINTR intercept setters don't look clean to me. To make the code
easier to read and enable the possibility to trap on a VINTR set, this uses a
helper function to set the VINTR intercept.

v2 uses two distinct functions for setting and clearing the bit

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/svm.c |   20 +++++++++++++++-----
 1 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 05efc4e..4ee5376 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -731,6 +731,16 @@ static void svm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	to_svm(vcpu)->vmcb->save.rflags = rflags;
 }

+static void svm_set_vintr(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.intercept |= 1ULL << INTERCEPT_VINTR;
+}
+
+static void svm_clear_vintr(struct vcpu_svm *svm)
+{
+	svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+}
+
 static struct vmcb_seg *svm_seg(struct kvm_vcpu *vcpu, int seg)
 {
 	struct vmcb_save_area *save = &to_svm(vcpu)->vmcb->save;
@@ -1376,7 +1386,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm,
 {
 	KVMTRACE_0D(PEND_INTR, &svm->vcpu, handler);

-	svm->vmcb->control.intercept &= ~(1ULL << INTERCEPT_VINTR);
+	svm_clear_vintr(svm);
 	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
 	/*
 	 * If the user space waits to inject interrupts, exit as soon as
@@ -1589,7 +1599,7 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu)
 	    (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
 	    (vmcb->control.event_inj & SVM_EVTINJ_VALID)) {
 		/* unable to deliver irq, set pending irq */
-		vmcb->control.intercept |= (1ULL << INTERCEPT_VINTR);
+		svm_set_vintr(svm);
 		svm_inject_irq(svm, 0x0);
 		goto out;
 	}
@@ -1649,9 +1659,9 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu,
 	 */
 	if (!svm->vcpu.arch.interrupt_window_open &&
 	    (svm->vcpu.arch.irq_summary || kvm_run->request_interrupt_window))
-		control->intercept |= 1ULL << INTERCEPT_VINTR;
-	else
-		control->intercept &= ~(1ULL << INTERCEPT_VINTR);
+		svm_set_vintr(svm);
+	else
+		svm_clear_vintr(svm);
 }

 static int svm_set_tss_addr(struct kvm *kvm, unsigned int addr)
--
1.5.6
[PATCH 2/9] Add helper functions for nested SVM v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

These are helpers for the nested SVM implementation.

- nsvm_printk implements a debug printk variant
- nested_svm_do calls a handler that can access gpa-based memory

v3 makes use of the new permission checker

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/svm.c |   88 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 88 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 4ee5376..a00421b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -48,6 +48,16 @@ MODULE_LICENSE("GPL");

 #define DEBUGCTL_RESERVED_BITS (~(0x3fULL))

+/* Turn on to get debugging output */
+/* #define NESTED_DEBUG */
+
+#ifdef NESTED_DEBUG
+#define nsvm_printk(fmt, args...) printk(KERN_INFO fmt, ## args)
+#else
+static inline void nsvm_printk(char *fmt, ...)
+{
+}
+#endif
+
 /* enable NPT for AMD64 and X86 with PAE */
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
 static bool npt_enabled = true;
@@ -1145,6 +1155,84 @@ static int vmmcall_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 	return 1;
 }

+static int nested_svm_check_permissions(struct vcpu_svm *svm)
+{
+	if (svm->vmcb->save.cpl) {
+		printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n",
+		       __func__, svm->vmcb->save.cpl, kvm_rip_read(&svm->vcpu));
+		kvm_queue_exception(&svm->vcpu, GP_VECTOR);
+		return 1;
+	}
+
+	if (!(svm->vcpu.arch.shadow_efer & MSR_EFER_SVME_MASK)
+	    || !is_paging(&svm->vcpu)) {
+		kvm_queue_exception(&svm->vcpu, UD_VECTOR);
+		return 1;
+	}
+
+	return 0;
+}
+
+static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa)
+{
+	struct page *page;
+
+	down_read(&current->mm->mmap_sem);
+	page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT);
+	up_read(&current->mm->mmap_sem);
+
+	if (is_error_page(page)) {
+		printk(KERN_ERR "%s: could not find page at 0x%llx\n",
+		       __func__, gpa);
+		kvm_release_page_clean(page);
+		kvm_queue_exception(&svm->vcpu, GP_VECTOR);
+		return NULL;
+	}
+	return page;
+}
+
+static int nested_svm_do(struct vcpu_svm *svm,
+			 u64 arg1_gpa, u64 arg2_gpa, void *opaque,
+			 int (*handler)(struct vcpu_svm *svm,
+					void *arg1,
+					void *arg2,
+					void *opaque))
+{
+	struct page *arg1_page;
+	struct page *arg2_page = NULL;
+	void *arg1;
+	void *arg2 = NULL;
+	int retval;
+
+	arg1_page = nested_svm_get_page(svm, arg1_gpa);
+	if(arg1_page == NULL)
+		return 1;
+
+	if (arg2_gpa) {
+		arg2_page = nested_svm_get_page(svm, arg2_gpa);
+		if(arg2_page == NULL) {
+			kvm_release_page_clean(arg1_page);
+			return 1;
+		}
+	}
+
+	arg1 = kmap_atomic(arg1_page, KM_USER0);
+	if (arg2_gpa)
+		arg2 = kmap_atomic(arg2_page, KM_USER1);
+
+	retval = handler(svm, arg1, arg2, opaque);
+
+	kunmap_atomic(arg1, KM_USER0);
+	if (arg2_gpa)
+		kunmap_atomic(arg2, KM_USER1);
+
+	kvm_release_page_dirty(arg1_page);
+	if (arg2_gpa)
+		kvm_release_page_dirty(arg2_page);
+
+	return retval;
+}
+
 static int invalid_op_interception(struct vcpu_svm *svm,
				    struct kvm_run *kvm_run)
 {
--
1.5.6
[PATCH 3/9] Implement GIF, clgi and stgi v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

This patch implements the GIF flag and the clgi and stgi instructions that
set this flag. Interrupts can only be received by the CPU if the flag is set
(which is the default).

To keep track of that state, this patch adds a new hidden flags vector that
is used to store information that does not go into the vmcb, but is SVM
specific.

v2 moves the hflags to x86 generic code
v3 makes use of the new permission helper

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/svm.c         |   42 +++++++++++++++++++++++++++++++++++++++---
 include/asm-x86/kvm_host.h |    3 +++
 2 files changed, 42 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index a00421b..62bfa2b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -614,6 +614,8 @@ static void init_vmcb(struct vcpu_svm *svm)
 		save->cr4 = 0;
 	}
 	force_new_asid(&svm->vcpu);
+
+	svm->vcpu.arch.hflags = HF_GIF_MASK;
 }

 static int svm_vcpu_reset(struct kvm_vcpu *vcpu)
@@ -1233,6 +1235,36 @@ static int nested_svm_do(struct vcpu_svm *svm,
 	return retval;
 }

+static int stgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+	if (nested_svm_check_permissions(svm))
+		return 1;
+
+	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+	skip_emulated_instruction(&svm->vcpu);
+
+	svm->vcpu.arch.hflags |= HF_GIF_MASK;
+
+	return 1;
+}
+
+static int clgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+	if (nested_svm_check_permissions(svm))
+		return 1;
+
+	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+	skip_emulated_instruction(&svm->vcpu);
+
+	svm->vcpu.arch.hflags &= ~HF_GIF_MASK;
+
+	/* After a CLGI no interrupts should come */
+	svm_clear_vintr(svm);
+	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
+
+	return 1;
+}
+
 static int invalid_op_interception(struct vcpu_svm *svm,
				    struct kvm_run *kvm_run)
 {
@@ -1534,8 +1566,8 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
 	[SVM_EXIT_VMMCALL]	= vmmcall_interception,
 	[SVM_EXIT_VMLOAD]	= invalid_op_interception,
 	[SVM_EXIT_VMSAVE]	= invalid_op_interception,
-	[SVM_EXIT_STGI]		= invalid_op_interception,
-	[SVM_EXIT_CLGI]		= invalid_op_interception,
+	[SVM_EXIT_STGI]		= stgi_interception,
+	[SVM_EXIT_CLGI]		= clgi_interception,
 	[SVM_EXIT_SKINIT]	= invalid_op_interception,
 	[SVM_EXIT_WBINVD]	= emulate_on_interception,
 	[SVM_EXIT_MONITOR]	= invalid_op_interception,
@@ -1683,6 +1715,9 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu)
 	if (!kvm_cpu_has_interrupt(vcpu))
 		goto out;

+	if (!(svm->vcpu.arch.hflags & HF_GIF_MASK))
+		goto out;
+
 	if (!(vmcb->save.rflags & X86_EFLAGS_IF) ||
 	    (vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) ||
 	    (vmcb->control.event_inj & SVM_EVTINJ_VALID)) {
@@ -1734,7 +1769,8 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu,
 	svm->vcpu.arch.interrupt_window_open =
 		(!(control->int_state & SVM_INTERRUPT_SHADOW_MASK) &&
-		 (svm->vmcb->save.rflags & X86_EFLAGS_IF));
+		 (svm->vmcb->save.rflags & X86_EFLAGS_IF) &&
+		 (svm->vcpu.arch.hflags & HF_GIF_MASK));

 	if (svm->vcpu.arch.interrupt_window_open && svm->vcpu.arch.irq_summary)
 		/*
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 4546535..421abf7 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -253,6 +253,7 @@ struct kvm_vcpu_arch {
 	unsigned long cr3;
 	unsigned long cr4;
 	unsigned long cr8;
+	u32 hflags;
 	u64 pdptrs[4]; /* pae */
 	u64 shadow_efer;
 	u64 apic_base;
@@ -734,6 +735,8 @@ enum {
 	TASK_SWITCH_GATE = 3,
 };

+#define HF_GIF_MASK (1 << 0)
+
 /*
  * Hardware virtualization extension instructions may fault if a
  * reboot turns off virtualization while processes are running.
--
1.5.6
[PATCH 4/9] Implement hsave v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

Implement the hsave MSR, which gives the VCPU a GPA to save the old guest
state in.

v2 allows userspace to save/restore hsave
v4 dummies out the hsave MSR, so we use a host page

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/kvm_svm.h |    1 +
 arch/x86/kvm/svm.c     |   12 ++++++++++++
 2 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/kvm_svm.h b/arch/x86/kvm/kvm_svm.h
index 65ef0fc..40cb128 100644
--- a/arch/x86/kvm/kvm_svm.h
+++ b/arch/x86/kvm/kvm_svm.h
@@ -41,6 +41,7 @@ struct vcpu_svm {
 	unsigned long host_dr7;

 	u32 *msrpm;
+	struct vmcb *hsave;
 };

 #endif
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 62bfa2b..eb301fe 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -640,6 +640,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	struct vcpu_svm *svm;
 	struct page *page;
 	struct page *msrpm_pages;
+	struct page *hsave_page;
 	int err;

 	svm = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
@@ -665,6 +666,11 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	svm->msrpm = page_address(msrpm_pages);
 	svm_vcpu_init_msrpm(svm->msrpm);

+	hsave_page = alloc_page(GFP_KERNEL);
+	if (!hsave_page)
+		goto uninit;
+	svm->hsave = page_address(hsave_page);
+
 	svm->vmcb = page_address(page);
 	clear_page(svm->vmcb);
 	svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
@@ -694,6 +700,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_page(pfn_to_page(svm->vmcb_pa >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
+	__free_page(virt_to_page(svm->hsave));
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -1376,6 +1383,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data)
 	case MSR_IA32_LASTINTTOIP:
 		*data = svm->vmcb->save.last_excp_to;
 		break;
+	case MSR_VM_HSAVE_PA:
+		*data = 0;
+		break;
 	default:
 		return kvm_get_msr_common(vcpu, ecx, data);
 	}
@@ -1470,6 +1480,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data)
 		pr_unimpl(vcpu, "unimplemented perfctr wrmsr: 0x%x data 0x%llx\n",
 			  ecx, data);
 		break;
+	case MSR_VM_HSAVE_PA:
+		break;
 	default:
 		return kvm_set_msr_common(vcpu, ecx, data);
 	}
--
1.5.6
[PATCH 5/9] Add VMLOAD and VMSAVE handlers v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

This implements the VMLOAD and VMSAVE instructions that usually surround the
VMRUN instruction. Both instructions load / restore the same elements, so we
only need to implement them once.

v2 fixes CPL checking and replaces memcpy by assignments
v3 makes use of the new permission checking

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/svm.c |   60 ++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index eb301fe..10ad02b 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1242,6 +1242,62 @@ static int nested_svm_do(struct vcpu_svm *svm,
 	return retval;
 }

+static int nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
+{
+	to_vmcb->save.fs = from_vmcb->save.fs;
+	to_vmcb->save.gs = from_vmcb->save.gs;
+	to_vmcb->save.tr = from_vmcb->save.tr;
+	to_vmcb->save.ldtr = from_vmcb->save.ldtr;
+	to_vmcb->save.kernel_gs_base = from_vmcb->save.kernel_gs_base;
+	to_vmcb->save.star = from_vmcb->save.star;
+	to_vmcb->save.lstar = from_vmcb->save.lstar;
+	to_vmcb->save.cstar = from_vmcb->save.cstar;
+	to_vmcb->save.sfmask = from_vmcb->save.sfmask;
+	to_vmcb->save.sysenter_cs = from_vmcb->save.sysenter_cs;
+	to_vmcb->save.sysenter_esp = from_vmcb->save.sysenter_esp;
+	to_vmcb->save.sysenter_eip = from_vmcb->save.sysenter_eip;
+
+	return 1;
+}
+
+static int nested_svm_vmload(struct vcpu_svm *svm, void *nested_vmcb,
+			     void *arg2, void *opaque)
+{
+	return nested_svm_vmloadsave((struct vmcb *)nested_vmcb, svm->vmcb);
+}
+
+static int nested_svm_vmsave(struct vcpu_svm *svm, void *nested_vmcb,
+			     void *arg2, void *opaque)
+{
+	return nested_svm_vmloadsave(svm->vmcb, (struct vmcb *)nested_vmcb);
+}
+
+static int vmload_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+	if (nested_svm_check_permissions(svm))
+		return 1;
+
+	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+	skip_emulated_instruction(&svm->vcpu);
+
+	nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, nested_svm_vmload);
+
+	return 1;
+}
+
+static int vmsave_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+	if (nested_svm_check_permissions(svm))
+		return 1;
+
+	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+	skip_emulated_instruction(&svm->vcpu);
+
+	nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, nested_svm_vmsave);
+
+	return 1;
+}
+
 static int stgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 {
 	if (nested_svm_check_permissions(svm))
@@ -1576,8 +1632,8 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
 	[SVM_EXIT_SHUTDOWN]	= shutdown_interception,
 	[SVM_EXIT_VMRUN]	= invalid_op_interception,
 	[SVM_EXIT_VMMCALL]	= vmmcall_interception,
-	[SVM_EXIT_VMLOAD]	= invalid_op_interception,
-	[SVM_EXIT_VMSAVE]	= invalid_op_interception,
+	[SVM_EXIT_VMLOAD]	= vmload_interception,
+	[SVM_EXIT_VMSAVE]	= vmsave_interception,
 	[SVM_EXIT_STGI]		= stgi_interception,
 	[SVM_EXIT_CLGI]		= clgi_interception,
 	[SVM_EXIT_SKINIT]	= invalid_op_interception,
--
1.5.6
[PATCH 6/9] Add VMRUN handler v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

This patch implements VMRUN. VMRUN enters a virtual CPU and runs that in the
same context as the normal guest CPU would run in. So basically it is
implemented the same way a normal CPU would do it.

We also prepare all intercepts that get OR'ed with the original intercepts,
as we do not allow a level 2 guest to be intercepted less than the first
level guest.

v2 implements the following improvements:
 - fixes the CPL check
 - does not allocate iopm when not used
 - remembers the host's IF in the HIF bit in the hflags

v3:
 - make use of the new permission checking
 - add support for V_INTR_MASKING_MASK

v4:
 - use host page backed hsave

v5:
 - remove IOPM merging code

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/kvm_svm.h     |    8 ++
 arch/x86/kvm/svm.c         |  156 +++++++++++++++++++++++++++++++++++++++++++-
 include/asm-x86/kvm_host.h |    2 +
 3 files changed, 164 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/kvm_svm.h b/arch/x86/kvm/kvm_svm.h
index 40cb128..c5854e8 100644
--- a/arch/x86/kvm/kvm_svm.h
+++ b/arch/x86/kvm/kvm_svm.h
@@ -42,6 +42,14 @@ struct vcpu_svm {

 	u32 *msrpm;
 	struct vmcb *hsave;
+
+	u64 nested_vmcb;
+
+	/* These are the merged vectors */
+	u32 *nested_msrpm;
+
+	/* gpa pointers to the real vectors */
+	u64 nested_vmcb_msrpm;
 };

 #endif
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 10ad02b..c3831b3 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -76,6 +76,11 @@ static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 	return container_of(vcpu, struct vcpu_svm, vcpu);
 }

+static inline bool is_nested(struct vcpu_svm *svm)
+{
+	return svm->nested_vmcb;
+}
+
 static unsigned long iopm_base;

 struct kvm_ldttss_desc {
@@ -615,6 +620,7 @@ static void init_vmcb(struct vcpu_svm *svm)
 	}
 	force_new_asid(&svm->vcpu);

+	svm->nested_vmcb = 0;
 	svm->vcpu.arch.hflags = HF_GIF_MASK;
 }

@@ -641,6 +647,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	struct page *page;
 	struct page *msrpm_pages;
 	struct page *hsave_page;
+	struct page *nested_msrpm_pages;
 	int err;

 	svm = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
@@ -663,6 +670,11 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 	msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
 	if (!msrpm_pages)
 		goto uninit;
+
+	nested_msrpm_pages = alloc_pages(GFP_KERNEL, MSRPM_ALLOC_ORDER);
+	if (!nested_msrpm_pages)
+		goto uninit;
+
 	svm->msrpm = page_address(msrpm_pages);
 	svm_vcpu_init_msrpm(svm->msrpm);
@@ -671,6 +683,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
 		goto uninit;
 	svm->hsave = page_address(hsave_page);

+	svm->nested_msrpm = page_address(nested_msrpm_pages);
+
 	svm->vmcb = page_address(page);
 	clear_page(svm->vmcb);
 	svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT;
@@ -701,6 +715,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
 	__free_page(pfn_to_page(svm->vmcb_pa >> PAGE_SHIFT));
 	__free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER);
 	__free_page(virt_to_page(svm->hsave));
+	__free_pages(virt_to_page(svm->nested_msrpm), MSRPM_ALLOC_ORDER);
 	kvm_vcpu_uninit(vcpu);
 	kmem_cache_free(kvm_vcpu_cache, svm);
 }
@@ -1242,6 +1257,122 @@ static int nested_svm_do(struct vcpu_svm *svm,
 	return retval;
 }

+
+static int nested_svm_vmrun_msrpm(struct vcpu_svm *svm, void *arg1,
+				  void *arg2, void *opaque)
+{
+	int i;
+	u32 *nested_msrpm = (u32*)arg1;
+	for (i=0; i< PAGE_SIZE * (1 << MSRPM_ALLOC_ORDER) / 4; i++)
+		svm->nested_msrpm[i] = svm->msrpm[i] | nested_msrpm[i];
+	svm->vmcb->control.msrpm_base_pa = __pa(svm->nested_msrpm);
+
+	return 0;
+}
+
+static int nested_svm_vmrun(struct vcpu_svm *svm, void *arg1,
+			    void *arg2, void *opaque)
+{
+	struct vmcb *nested_vmcb = (struct vmcb *)arg1;
+	struct vmcb *hsave = svm->hsave;
+
+	/* nested_vmcb is our indicator if nested SVM is activated */
+	svm->nested_vmcb = svm->vmcb->save.rax;
+
+	/* Clear internal status */
+	svm->vcpu.arch.exception.pending = false;
+
+	/* Save the old vmcb, so we don't need to pick what we save, but
+	   can restore everything when a VMEXIT occurs */
+	memcpy(hsave, svm->vmcb, sizeof(struct vmcb));
+	/* We need to remember the original CR3 in the SPT case */
+	if (!npt_enabled)
+		hsave->save.cr3 = svm->vcpu.arch.cr3;
+	hsave->save.rip = svm->next_rip;
+
+	if (svm->vmcb->save.rflags & X86_EFLAGS_IF)
+		svm->vcpu.arch.hflags |= HF_HIF_MASK;
+	else
+		svm->vcpu.arch.hflags &= ~HF_HIF_MASK;
+
+	/* Load the nested guest state */
+	svm->vmcb->save.es = nested_vmcb->save.es;
+	svm->vmcb->save.cs = nested_vmcb->save.cs;
+	svm->vmcb->save.ss = nested_vmcb->save.ss;
+	svm->vmcb->save.ds = nested_vmcb->save.ds;
+	svm->vmcb->save.gdtr = nested_vmcb->save.gdtr;
+	svm->vmcb->save.idtr = nested_vmcb->save.idtr;
+	svm->vmcb->save.rflags = nested_vmcb->save.rflags;
+	svm_set_efer(&svm->vcpu, nested_vmcb->save.efer);
+	svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0);
+	svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4);
+	if (npt_enabled) {
+		svm->vmcb->save.cr3 = nested_vmcb->save.cr3;
+		svm->vcpu.arch.cr3 = nested_vmcb->save.cr3;
+	} else {
+		kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3);
+		kvm_mmu_reset_context(&svm->vcpu);
+	}
+	svm->vmcb->save.cr2 = nested_vmcb->save.cr2;
+	kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, nested_vmcb->save.rax);
+	kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp);
+	kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip);
+	/* In case we don't even reach vcpu_run, the fields are not updated */
+	svm->vmcb->save.rax = nested_vmcb->save.rax;
+	svm->vmcb->save.rsp = nested_vmcb->save.rsp;
+	svm->vmcb->save.rip = nested_vmcb->save.rip;
+	svm->vmcb->save.dr7 = nested_vmcb->save.dr7;
+	svm->vmcb->save.dr6 = nested_vmcb->save.dr6;
+	svm->vmcb->save.cpl = nested_vmcb->save.cpl;
+
+	/* We don't want a nested guest to be more powerful than the guest,
+	   so all intercepts are ORed */
+	svm->vmcb->control.intercept_cr_read |=
+		nested_vmcb->control.intercept_cr_read;
+	svm->vmcb->control.intercept_cr_write |=
+		nested_vmcb->control.intercept_cr_write;
+	svm->vmcb->control.intercept_dr_read |=
+		nested_vmcb->control.intercept_dr_read;
+	svm->vmcb->control.intercept_dr_write |=
+		nested_vmcb->control.intercept_dr_write;
+	svm->vmcb->control.intercept_exceptions |=
+		nested_vmcb->control.intercept_exceptions;
+
+	svm->vmcb->control.intercept |= nested_vmcb->control.intercept;
+
+	svm->nested_vmcb_msrpm = nested_vmcb->control.msrpm_base_pa;
+
+	force_new_asid(&svm->vcpu);
+	svm->vmcb->control.exit_int_info = nested_vmcb->control.exit_int_info;
+	svm->vmcb->control.exit_int_info_err = nested_vmcb->control.exit_int_info_err;
+	svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK;
+	if (nested_vmcb->control.int_ctl & V_IRQ_MASK) {
+		nsvm_printk("nSVM Injecting Interrupt: 0x%x\n",
+			    nested_vmcb->control.int_ctl);
+	}
+	if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK)
+		svm->vcpu.arch.hflags |= HF_VINTR_MASK;
+	else
+		svm->vcpu.arch.hflags &= ~HF_VINTR_MASK;
+
+	nsvm_printk("nSVM exit_int_info: 0x%x | int_state: 0x%x\n",
+		    nested_vmcb->control.exit_int_info,
+		    nested_vmcb->control.int_state);
+
+	svm->vmcb->control.int_vector = nested_vmcb->control.int_vector;
+	svm->vmcb->control.int_state = nested_vmcb->control.int_state;
+	svm->vmcb->control.tsc_offset += nested_vmcb->control.tsc_offset;
+	if (nested_vmcb->control.event_inj & SVM_EVTINJ_VALID)
+		nsvm_printk("Injecting Event: 0x%x\n",
+			    nested_vmcb->control.event_inj);
+	svm->vmcb->control.event_inj = nested_vmcb->control.event_inj;
+	svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err;
+
+	svm->vcpu.arch.hflags |= HF_GIF_MASK;
+
+	return 1;
+}
+
 static int nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb)
 {
 	to_vmcb->save.fs = from_vmcb->save.fs;
@@ -1298,6 +1429,26 @@ static int vmsave_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 	return 1;
 }

+static int vmrun_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
+{
+	nsvm_printk("VMrun\n");
+	if (nested_svm_check_permissions(svm))
+		return 1;
+
+	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
+	skip_emulated_instruction(&svm->vcpu);
+
+	if (nested_svm_do(svm, svm->vmcb->save.rax, 0,
+			  NULL, nested_svm_vmrun))
+		return 1;
+
+	if (nested_svm_do(svm, svm->vmcb->control.msrpm_base_pa, 0,
+			  NULL, nested_svm_vmrun_msrpm))
+		return 1;
+
+	return 1;
+}
+
 static int stgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 {
 	if (nested_svm_check_permissions(svm))
@@ -1630,7 +1781,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
 	[SVM_EXIT_MSR]		= msr_interception,
 	[SVM_EXIT_TASK_SWITCH]	= task_switch_interception,
 	[SVM_EXIT_SHUTDOWN]	= shutdown_interception,
-	[SVM_EXIT_VMRUN]	= invalid_op_interception,
+	[SVM_EXIT_VMRUN]	= vmrun_interception,
 	[SVM_EXIT_VMMCALL]	= vmmcall_interception,
 	[SVM_EXIT_VMLOAD]	= vmload_interception,
 	[SVM_EXIT_VMSAVE]	= vmsave_interception,
@@ -1937,7 +2088,8 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
 	svm->host_cr2 = kvm_read_cr2();
 	svm->host_dr6 = read_dr6();
 	svm->host_dr7 = read_dr7();
-	svm->vmcb->save.cr2 = vcpu->arch.cr2;
+	if (!is_nested(svm))
+		svm->vmcb->save.cr2 = vcpu->arch.cr2;
 	/* required for live migration with NPT */
 	if (npt_enabled)
 		svm->vmcb->save.cr3 = vcpu->arch.cr3;
diff --git a/include/asm-x86/kvm_host.h b/include/asm-x86/kvm_host.h
index 421abf7..7d28894 100644
--- a/include/asm-x86/kvm_host.h
+++ b/include/asm-x86/kvm_host.h
@@ -736,6 +736,8 @@ enum {
 };

 #define HF_GIF_MASK (1 << 0)
+#define HF_HIF_MASK (1 << 1)
+#define HF_VINTR_MASK (1 << 2)

 /*
  * Hardware virtualization extension instructions may fault if a
--
1.5.6
[PATCH 7/9] Add VMEXIT handler and intercepts v5

From: Alexander Graf @ 2008-10-20 17:04 UTC
To: kvm; +Cc: joro, anthony, avi

This adds the #VMEXIT intercept, so we return to the level 1 guest when an
event occurs in the level 2 guest that the level 1 guest intercepts.

v2 implements HIF handling and cleans up exception interception
v3 adds support for V_INTR_MASKING_MASK
v4 uses the host page hsave
v5 removes IOPM merging code

Signed-off-by: Alexander Graf <agraf@suse.de>
---
 arch/x86/kvm/svm.c |  295 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 295 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index c3831b3..720b610 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -71,6 +71,13 @@ module_param(npt, int, S_IRUGO);
 static void kvm_reput_irq(struct vcpu_svm *svm);
 static void svm_flush_tlb(struct kvm_vcpu *vcpu);

+static int nested_svm_exit_handled(struct vcpu_svm *svm, bool kvm_override);
+static int nested_svm_vmexit(struct vcpu_svm *svm);
+static int nested_svm_vmsave(struct vcpu_svm *svm, void *nested_vmcb,
+			     void *arg2, void *opaque);
+static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
+				      bool has_error_code, u32 error_code);
+
 static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
 {
 	return container_of(vcpu, struct vcpu_svm, vcpu);
@@ -220,6 +227,11 @@ static void svm_queue_exception(struct kvm_vcpu *vcpu, unsigned nr,
 {
 	struct vcpu_svm *svm = to_svm(vcpu);

+	/* If we are within a nested VM we'd better #VMEXIT and let the
+	   guest handle the exception */
+	if (nested_svm_check_exception(svm, nr, has_error_code, error_code))
+		return;
+
 	svm->vmcb->control.event_inj = nr
 		| SVM_EVTINJ_VALID
 		| (has_error_code ? SVM_EVTINJ_VALID_ERR : 0)
@@ -1197,6 +1209,46 @@ static int nested_svm_check_permissions(struct vcpu_svm *svm)
 	return 0;
 }

+static int nested_svm_check_exception(struct vcpu_svm *svm, unsigned nr,
+				      bool has_error_code, u32 error_code)
+{
+	if (is_nested(svm)) {
+		svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + nr;
+		svm->vmcb->control.exit_code_hi = 0;
+		svm->vmcb->control.exit_info_1 = error_code;
+		svm->vmcb->control.exit_info_2 = svm->vcpu.arch.cr2;
+		if (nested_svm_exit_handled(svm, false)) {
+			nsvm_printk("VMexit -> EXCP 0x%x\n", nr);
+
+			nested_svm_vmexit(svm);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
+static inline int nested_svm_intr(struct vcpu_svm *svm)
+{
+	if (is_nested(svm)) {
+		if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK))
+			return 0;
+
+		if (!(svm->vcpu.arch.hflags & HF_HIF_MASK))
+			return 0;
+
+		svm->vmcb->control.exit_code = SVM_EXIT_INTR;
+
+		if (nested_svm_exit_handled(svm, false)) {
+			nsvm_printk("VMexit -> INTR\n");
+			nested_svm_vmexit(svm);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
 static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa)
 {
 	struct page *page;
@@ -1257,6 +1309,230 @@ static int nested_svm_do(struct vcpu_svm *svm,
 	return retval;
 }

+static int nested_svm_exit_handled_real(struct vcpu_svm *svm,
+					void *arg1,
+					void *arg2,
+					void *opaque)
+{
+	struct vmcb *nested_vmcb = (struct vmcb *)arg1;
+	bool kvm_overrides = *(bool *)opaque;
+	u32 exit_code = svm->vmcb->control.exit_code;
+
+	if (kvm_overrides) {
+		switch (exit_code) {
+		case SVM_EXIT_INTR:
+		case SVM_EXIT_NMI:
+			return 0;
+		/* For now we are always handling NPFs when using them */
+		case SVM_EXIT_NPF:
+			if (npt_enabled)
+				return 0;
+			break;
+		/* When we're shadowing, trap PFs */
+		case SVM_EXIT_EXCP_BASE + PF_VECTOR:
+			if (!npt_enabled)
+				return 0;
+			break;
+		default:
+			break;
+		}
+	}
+
+	switch (exit_code) {
+	case SVM_EXIT_READ_CR0 ... SVM_EXIT_READ_CR8: {
+		u32 cr_bits = 1 << (exit_code - SVM_EXIT_READ_CR0);
+		if (nested_vmcb->control.intercept_cr_read & cr_bits)
+			return 1;
+		break;
+	}
+	case SVM_EXIT_WRITE_CR0 ... SVM_EXIT_WRITE_CR8: {
+		u32 cr_bits = 1 << (exit_code - SVM_EXIT_WRITE_CR0);
+		if (nested_vmcb->control.intercept_cr_write & cr_bits)
+			return 1;
+		break;
+	}
+	case SVM_EXIT_READ_DR0 ... SVM_EXIT_READ_DR7: {
+		u32 dr_bits = 1 << (exit_code - SVM_EXIT_READ_DR0);
+		if (nested_vmcb->control.intercept_dr_read & dr_bits)
+			return 1;
+		break;
+	}
+	case SVM_EXIT_WRITE_DR0 ... SVM_EXIT_WRITE_DR7: {
+		u32 dr_bits = 1 << (exit_code - SVM_EXIT_WRITE_DR0);
+		if (nested_vmcb->control.intercept_dr_write & dr_bits)
+			return 1;
+		break;
+	}
+	case SVM_EXIT_EXCP_BASE ... SVM_EXIT_EXCP_BASE + 0x1f: {
+		u32 excp_bits = 1 << (exit_code - SVM_EXIT_EXCP_BASE);
+		if (nested_vmcb->control.intercept_exceptions & excp_bits)
+			return 1;
+		break;
+	}
+	default: {
+		u64 exit_bits = 1ULL << (exit_code - SVM_EXIT_INTR);
+		nsvm_printk("exit code: 0x%x\n", exit_code);
+		if (nested_vmcb->control.intercept & exit_bits)
+			return 1;
+	}
+	}
+
+	return 0;
+}
+
+static int nested_svm_exit_handled_msr(struct vcpu_svm *svm,
+				       void *arg1, void *arg2,
+				       void *opaque)
+{
+	struct vmcb *nested_vmcb = (struct vmcb *)arg1;
+	u8 *msrpm = (u8 *)arg2;
+	u32 t0, t1;
+	u32 msr = svm->vcpu.arch.regs[VCPU_REGS_RCX];
+	u32 param = svm->vmcb->control.exit_info_1 & 1;
+
+	if (!(nested_vmcb->control.intercept & (1ULL << INTERCEPT_MSR_PROT)))
+		return 0;
+
+	switch(msr) {
+	case 0 ... 0x1fff:
+		t0 = (msr * 2) % 8;
+		t1 = msr / 8;
+		break;
+	case 0xc0000000 ... 0xc0001fff:
+		t0 = (8192 + msr - 0xc0000000) * 2;
+		t1 = (t0 / 8);
+		t0 %= 8;
+		break;
+	case 0xc0010000 ... 0xc0011fff:
+		t0 = (16384 + msr - 0xc0010000) * 2;
+		t1 = (t0 / 8);
+		t0 %= 8;
+		break;
+	default:
+		return 1;
+		break;
+	}
+	if (msrpm[t1] & ((1 << param) << t0))
+		return 1;
+
+	return 0;
+}
+
+static int nested_svm_exit_handled(struct vcpu_svm *svm, bool kvm_override)
+{
+	bool k = kvm_override;
+
+	switch (svm->vmcb->control.exit_code) {
+	case SVM_EXIT_MSR:
+		return nested_svm_do(svm, svm->nested_vmcb,
+				     svm->nested_vmcb_msrpm, NULL,
+				     nested_svm_exit_handled_msr);
+	default:
+		break;
+	}
+
+	return nested_svm_do(svm, svm->nested_vmcb, 0, &k,
+			     nested_svm_exit_handled_real);
+}
+
+static int nested_svm_vmexit_real(struct vcpu_svm *svm, void *arg1,
+				  void *arg2, void *opaque)
+{
+	struct vmcb *nested_vmcb = (struct vmcb *)arg1;
+	struct vmcb *hsave = svm->hsave;
+	u64 nested_save[] = { nested_vmcb->save.cr0,
+			      nested_vmcb->save.cr3,
+			      nested_vmcb->save.cr4,
+			      nested_vmcb->save.efer,
+			      nested_vmcb->control.intercept_cr_read,
+			      nested_vmcb->control.intercept_cr_write,
+			      nested_vmcb->control.intercept_dr_read,
+			      nested_vmcb->control.intercept_dr_write,
+			      nested_vmcb->control.intercept_exceptions,
+			      nested_vmcb->control.intercept,
+			      nested_vmcb->control.msrpm_base_pa,
+			      nested_vmcb->control.iopm_base_pa,
+			      nested_vmcb->control.tsc_offset };
+
+	/* Give the current vmcb to the guest */
+	memcpy(nested_vmcb, svm->vmcb, sizeof(struct vmcb));
+	nested_vmcb->save.cr0 = nested_save[0];
+	if (!npt_enabled)
+		nested_vmcb->save.cr3 = nested_save[1];
+	nested_vmcb->save.cr4 = nested_save[2];
+	nested_vmcb->save.efer = nested_save[3];
+	nested_vmcb->control.intercept_cr_read = nested_save[4];
+	nested_vmcb->control.intercept_cr_write = nested_save[5];
+	nested_vmcb->control.intercept_dr_read = nested_save[6];
+	nested_vmcb->control.intercept_dr_write = nested_save[7];
+	nested_vmcb->control.intercept_exceptions = nested_save[8];
+	nested_vmcb->control.intercept = nested_save[9];
+	nested_vmcb->control.msrpm_base_pa = nested_save[10];
+	nested_vmcb->control.iopm_base_pa =
nested_save[11]; + nested_vmcb->control.tsc_offset = nested_save[12]; + + /* We always set V_INTR_MASKING and remember the old value in hflags */ + if (!(svm->vcpu.arch.hflags & HF_VINTR_MASK)) + nested_vmcb->control.int_ctl &= ~V_INTR_MASKING_MASK; + + if ((nested_vmcb->control.int_ctl & V_IRQ_MASK) && + (nested_vmcb->control.int_vector)) { + nsvm_printk("WARNING: IRQ 0x%x still enabled on #VMEXIT\n", + nested_vmcb->control.int_vector); + } + + /* Restore the original control entries */ + svm->vmcb->control = hsave->control; + + /* Flush the virtual TLB */ + force_new_asid(&svm->vcpu); + + /* Kill any pending exceptions */ + if (svm->vcpu.arch.exception.pending == true) + nsvm_printk("WARNING: Pending Exception\n"); + svm->vcpu.arch.exception.pending = false; + + /* Restore selected save entries */ + svm->vmcb->save.es = hsave->save.es; + svm->vmcb->save.cs = hsave->save.cs; + svm->vmcb->save.ss = hsave->save.ss; + svm->vmcb->save.ds = hsave->save.ds; + svm->vmcb->save.gdtr = hsave->save.gdtr; + svm->vmcb->save.idtr = hsave->save.idtr; + svm->vmcb->save.rflags = hsave->save.rflags; + svm_set_efer(&svm->vcpu, hsave->save.efer); + svm_set_cr0(&svm->vcpu, hsave->save.cr0 | X86_CR0_PE); + svm_set_cr4(&svm->vcpu, hsave->save.cr4); + if (npt_enabled) { + svm->vmcb->save.cr3 = hsave->save.cr3; + svm->vcpu.arch.cr3 = hsave->save.cr3; + } else { + kvm_set_cr3(&svm->vcpu, hsave->save.cr3); + } + kvm_mmu_reset_context(&svm->vcpu); + kvm_mmu_load(&svm->vcpu); + kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, hsave->save.rax); + kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, hsave->save.rsp); + kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, hsave->save.rip); + svm->vmcb->save.dr7 = 0; + svm->vmcb->save.cpl = 0; + svm->vmcb->control.exit_int_info = 0; + + svm->vcpu.arch.hflags &= ~HF_GIF_MASK; + /* Exit nested SVM mode */ + svm->nested_vmcb = 0; + + return 0; +} + +static int nested_svm_vmexit(struct vcpu_svm *svm) +{ + nsvm_printk("VMexit\n"); + if (nested_svm_do(svm, 
svm->nested_vmcb, 0, + NULL, nested_svm_vmexit_real)) + return 1; + + return 0; +} static int nested_svm_vmrun_msrpm(struct vcpu_svm *svm, void *arg1, void *arg2, void *opaque) @@ -1802,6 +2078,17 @@ static int handle_exit(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu) KVMTRACE_3D(VMEXIT, vcpu, exit_code, (u32)svm->vmcb->save.rip, (u32)((u64)svm->vmcb->save.rip >> 32), entryexit); + if (is_nested(svm)) { + nsvm_printk("nested handle_exit: 0x%x | 0x%lx | 0x%lx | 0x%lx\n", + exit_code, svm->vmcb->control.exit_info_1, + svm->vmcb->control.exit_info_2, svm->vmcb->save.rip); + if (nested_svm_exit_handled(svm, true)) { + nested_svm_vmexit(svm); + nsvm_printk("-> #VMEXIT\n"); + return 1; + } + } + if (npt_enabled) { int mmu_reload = 0; if ((vcpu->arch.cr0 ^ svm->vmcb->save.cr0) & X86_CR0_PG) { @@ -1889,6 +2176,8 @@ static void svm_set_irq(struct kvm_vcpu *vcpu, int irq) { struct vcpu_svm *svm = to_svm(vcpu); + nested_svm_intr(svm); + svm_inject_irq(svm, irq); } @@ -1934,6 +2223,9 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu) if (!kvm_cpu_has_interrupt(vcpu)) goto out; + if (nested_svm_intr(svm)) + goto out; + if (!(svm->vcpu.arch.hflags & HF_GIF_MASK)) goto out; @@ -1986,6 +2278,9 @@ static void do_interrupt_requests(struct kvm_vcpu *vcpu, struct vcpu_svm *svm = to_svm(vcpu); struct vmcb_control_area *control = &svm->vmcb->control; + if (nested_svm_intr(svm)) + return; + svm->vcpu.arch.interrupt_window_open = (!(control->int_state & SVM_INTERRUPT_SHADOW_MASK) && (svm->vmcb->save.rflags & X86_EFLAGS_IF) && -- 1.5.6 ^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH 8/9] allow read access to MSR_VM_VR 2008-10-20 17:04 ` [PATCH 7/9] Add VMEXIT handler and intercepts v5 Alexander Graf @ 2008-10-20 17:04 ` Alexander Graf 2008-10-20 17:04 ` [PATCH 9/9] Allow setting the SVME bit v5 Alexander Graf 2008-10-30 19:15 ` [PATCH 8/9] allow read access to MSR_VM_VR Anthony Liguori 0 siblings, 2 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-20 17:04 UTC (permalink / raw) To: kvm; +Cc: joro, anthony, avi, Joerg Roedel From: Joerg Roedel <joerg.roedel@amd.com> KVM tries to read the VM_CR MSR to find out if SVM was disabled by the BIOS. So implement read support for this MSR to make nested SVM running. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/svm.c | 3 +++ 1 files changed, 3 insertions(+), 0 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 720b610..4582699 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1869,6 +1869,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data) case MSR_VM_HSAVE_PA: *data = 0; break; + case MSR_VM_CR: + *data = 0; + break; default: return kvm_get_msr_common(vcpu, ecx, data); } -- 1.5.6 ^ permalink raw reply related [flat|nested] 53+ messages in thread
* [PATCH 9/9] Allow setting the SVME bit v5 2008-10-20 17:04 ` [PATCH 8/9] allow read access to MSR_VM_VR Alexander Graf @ 2008-10-20 17:04 ` Alexander Graf 2008-10-29 13:58 ` Joerg Roedel 2008-10-30 19:16 ` Anthony Liguori 2008-10-30 19:15 ` [PATCH 8/9] allow read access to MSR_VM_VR Anthony Liguori 1 sibling, 2 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-20 17:04 UTC (permalink / raw) To: kvm; +Cc: joro, anthony, avi Normally setting the SVME bit in EFER is not allowed, as we did not support SVM. Not since we do, we should also allow enabling SVM mode. v2 comes as last patch, so we don't enable half-ready code v4 introduces a module option to enable SVM Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/svm.c | 6 ++++++ 1 files changed, 6 insertions(+), 0 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 4582699..1e63860 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -68,6 +68,9 @@ static int npt = 1; module_param(npt, int, S_IRUGO); +static int nested = 0; +module_param(nested, int, S_IRUGO); + static void kvm_reput_irq(struct vcpu_svm *svm); static void svm_flush_tlb(struct kvm_vcpu *vcpu); @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) if (boot_cpu_has(X86_FEATURE_NX)) kvm_enable_efer_bits(EFER_NX); + if (nested) + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); + for_each_online_cpu(cpu) { r = svm_cpu_init(cpu); if (r) -- 1.5.6 ^ permalink raw reply related [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-20 17:04 ` [PATCH 9/9] Allow setting the SVME bit v5 Alexander Graf @ 2008-10-29 13:58 ` Joerg Roedel 2008-10-29 14:03 ` Alexander Graf 2008-10-29 14:07 ` Avi Kivity 2008-10-30 19:16 ` Anthony Liguori 1 sibling, 2 replies; 53+ messages in thread From: Joerg Roedel @ 2008-10-29 13:58 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, anthony, avi On Mon, Oct 20, 2008 at 07:04:50PM +0200, Alexander Graf wrote: > Normally setting the SVME bit in EFER is not allowed, as we did > not support SVM. Not since we do, we should also allow enabling > SVM mode. > > v2 comes as last patch, so we don't enable half-ready code > v4 introduces a module option to enable SVM > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 6 ++++++ > 1 files changed, 6 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 4582699..1e63860 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -68,6 +68,9 @@ static int npt = 1; > > module_param(npt, int, S_IRUGO); > > +static int nested = 0; > +module_param(nested, int, S_IRUGO); > + > static void kvm_reput_irq(struct vcpu_svm *svm); > static void svm_flush_tlb(struct kvm_vcpu *vcpu); > > @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) > if (boot_cpu_has(X86_FEATURE_NX)) > kvm_enable_efer_bits(EFER_NX); > > + if (nested) > + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); > + > for_each_online_cpu(cpu) { > r = svm_cpu_init(cpu); > if (r) What I think is missing here is some kind of userspace interface. If we do it this way (with a kernel parameter) we should expose to userspace if nested svm is enabled. This way userspace knows whether it has to set the SVM CPUID bit or not. ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-29 13:58 ` Joerg Roedel @ 2008-10-29 14:03 ` Alexander Graf 2008-10-29 14:17 ` Avi Kivity 2008-10-29 14:07 ` Avi Kivity 1 sibling, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-29 14:03 UTC (permalink / raw) To: Joerg Roedel; +Cc: kvm, anthony, avi On 29.10.2008, at 14:58, Joerg Roedel wrote: > On Mon, Oct 20, 2008 at 07:04:50PM +0200, Alexander Graf wrote: >> Normally setting the SVME bit in EFER is not allowed, as we did >> not support SVM. Not since we do, we should also allow enabling >> SVM mode. >> >> v2 comes as last patch, so we don't enable half-ready code >> v4 introduces a module option to enable SVM >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/svm.c | 6 ++++++ >> 1 files changed, 6 insertions(+), 0 deletions(-) >> >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index 4582699..1e63860 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -68,6 +68,9 @@ static int npt = 1; >> >> module_param(npt, int, S_IRUGO); >> >> +static int nested = 0; >> +module_param(nested, int, S_IRUGO); >> + >> static void kvm_reput_irq(struct vcpu_svm *svm); >> static void svm_flush_tlb(struct kvm_vcpu *vcpu); >> >> @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) >> if (boot_cpu_has(X86_FEATURE_NX)) >> kvm_enable_efer_bits(EFER_NX); >> >> + if (nested) >> + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); >> + >> for_each_online_cpu(cpu) { >> r = svm_cpu_init(cpu); >> if (r) > > > What I think is missing here is some kind of userspace interface. If > we > do it this way (with a kernel parameter) we should expose to userspace > if nested svm is enabled. This way userspace knows whether it has to > set > the SVM CPUID bit or not. To an Operating System, the SVM CPUID should not imply that SVM works. The firmware can easily disable it by locking the SVM capability MSR to "off". Maybe we could expose this behavior in the nested=0 case. 
I don't believe we need a userspace interface for now. The userspace part does not need to know what capabilities the virtual CPU has - that's what the CPUID masking in the kvm code is for IMHO. ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-29 14:03 ` Alexander Graf @ 2008-10-29 14:17 ` Avi Kivity 0 siblings, 0 replies; 53+ messages in thread From: Avi Kivity @ 2008-10-29 14:17 UTC (permalink / raw) To: Alexander Graf; +Cc: Joerg Roedel, kvm, anthony Alexander Graf wrote: > I don't believe we need a userspace interface for now. The userspace > part does not need to know what capabilities the virtual CPU has - > that's what the CPUID masking in the kvm code is for IMHO. It's the other way round. The kernel exposes which features are available, and userspace decides what to present to the guest. Kernel masking means userspace doesn't really know what the guest sees. The existing masking in the kernel is due to bugs. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-29 13:58 ` Joerg Roedel 2008-10-29 14:03 ` Alexander Graf @ 2008-10-29 14:07 ` Avi Kivity 1 sibling, 0 replies; 53+ messages in thread From: Avi Kivity @ 2008-10-29 14:07 UTC (permalink / raw) To: Joerg Roedel; +Cc: Alexander Graf, kvm, anthony Joerg Roedel wrote: > On Mon, Oct 20, 2008 at 07:04:50PM +0200, Alexander Graf wrote: > >> Normally setting the SVME bit in EFER is not allowed, as we did >> not support SVM. Not since we do, we should also allow enabling >> SVM mode. >> >> v2 comes as last patch, so we don't enable half-ready code >> v4 introduces a module option to enable SVM >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/svm.c | 6 ++++++ >> 1 files changed, 6 insertions(+), 0 deletions(-) >> >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index 4582699..1e63860 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -68,6 +68,9 @@ static int npt = 1; >> >> module_param(npt, int, S_IRUGO); >> >> +static int nested = 0; >> +module_param(nested, int, S_IRUGO); >> + >> static void kvm_reput_irq(struct vcpu_svm *svm); >> static void svm_flush_tlb(struct kvm_vcpu *vcpu); >> >> @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) >> if (boot_cpu_has(X86_FEATURE_NX)) >> kvm_enable_efer_bits(EFER_NX); >> >> + if (nested) >> + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); >> + >> for_each_online_cpu(cpu) { >> r = svm_cpu_init(cpu); >> if (r) >> > > > What I think is missing here is some kind of userspace interface. If we > do it this way (with a kernel parameter) we should expose to userspace > if nested svm is enabled. This way userspace knows whether it has to set > the SVM CPUID bit or not. > Right. There's the KVM_GET_SUPPORTED_CPUID thing that can be used for this. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-20 17:04 ` [PATCH 9/9] Allow setting the SVME bit v5 Alexander Graf 2008-10-29 13:58 ` Joerg Roedel @ 2008-10-30 19:16 ` Anthony Liguori 2008-10-30 19:24 ` Avi Kivity 2008-10-30 20:44 ` Alexander Graf 1 sibling, 2 replies; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 19:16 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > Normally setting the SVME bit in EFER is not allowed, as we did > not support SVM. Not since we do, we should also allow enabling > SVM mode. > > v2 comes as last patch, so we don't enable half-ready code > v4 introduces a module option to enable SVM > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 6 ++++++ > 1 files changed, 6 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 4582699..1e63860 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -68,6 +68,9 @@ static int npt = 1; > > module_param(npt, int, S_IRUGO); > > +static int nested = 0; > +module_param(nested, int, S_IRUGO); > Instead of doing this as a module parameter, we could either avoid advertising SVM support in cpuid from userspace or we could disable SVM in the BIOS. I really like the later approach because it gives a better error message within the guest. We could also keep the module parameter but it would be nice to support disabling SVM in userspace too. Regards, Anthony Liguori > static void kvm_reput_irq(struct vcpu_svm *svm); > static void svm_flush_tlb(struct kvm_vcpu *vcpu); > > @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) > if (boot_cpu_has(X86_FEATURE_NX)) > kvm_enable_efer_bits(EFER_NX); > > + if (nested) > + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); > + > for_each_online_cpu(cpu) { > r = svm_cpu_init(cpu); > if (r) > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-30 19:16 ` Anthony Liguori @ 2008-10-30 19:24 ` Avi Kivity 2008-10-30 20:46 ` Alexander Graf 2008-10-30 20:44 ` Alexander Graf 1 sibling, 1 reply; 53+ messages in thread From: Avi Kivity @ 2008-10-30 19:24 UTC (permalink / raw) To: Anthony Liguori; +Cc: Alexander Graf, kvm, joro Anthony Liguori wrote: >> +static int nested = 0; >> +module_param(nested, int, S_IRUGO); >> > > Instead of doing this as a module parameter, we could either avoid > advertising SVM support in cpuid from userspace or we could disable > SVM in the BIOS. > > I really like the later approach because it gives a better error > message within the guest. We could also keep the module parameter but > it would be nice to support disabling SVM in userspace too. We do need to expose svme in KVM_GET_SUPPORTED_CPUID, so userspace can enable it if it wants. But I'd like to keep the module parameter (defaulting to off) until this has been tested better. -- I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain. ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-30 19:24 ` Avi Kivity @ 2008-10-30 20:46 ` Alexander Graf 0 siblings, 0 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-30 20:46 UTC (permalink / raw) To: Avi Kivity; +Cc: Anthony Liguori, kvm, joro On 30.10.2008, at 20:24, Avi Kivity wrote: > Anthony Liguori wrote: >>> +static int nested = 0; >>> +module_param(nested, int, S_IRUGO); >>> >> >> Instead of doing this as a module parameter, we could either avoid >> advertising SVM support in cpuid from userspace or we could disable >> SVM in the BIOS. >> >> I really like the later approach because it gives a better error >> message within the guest. We could also keep the module parameter >> but it would be nice to support disabling SVM in userspace too. > > We do need to expose svme in KVM_GET_SUPPORTED_CPUID, so userspace > can enable it if it wants. But I'd like to keep the module > parameter (defaulting to off) until this has been tested better. Sure, but we can do both. Joerg was bugging me before a lot because he wanted to have SVM switched on on a per-guest basis. This would basically enable us to do that - still guarded by the kernel module parameter. Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-30 19:16 ` Anthony Liguori 2008-10-30 19:24 ` Avi Kivity @ 2008-10-30 20:44 ` Alexander Graf 2008-10-30 20:52 ` Anthony Liguori 1 sibling, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-30 20:44 UTC (permalink / raw) To: Anthony Liguori; +Cc: kvm, joro, avi On 30.10.2008, at 20:16, Anthony Liguori wrote: > Alexander Graf wrote: >> Normally setting the SVME bit in EFER is not allowed, as we did >> not support SVM. Not since we do, we should also allow enabling >> SVM mode. >> >> v2 comes as last patch, so we don't enable half-ready code >> v4 introduces a module option to enable SVM >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/svm.c | 6 ++++++ >> 1 files changed, 6 insertions(+), 0 deletions(-) >> >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index 4582699..1e63860 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -68,6 +68,9 @@ static int npt = 1; >> module_param(npt, int, S_IRUGO); >> +static int nested = 0; >> +module_param(nested, int, S_IRUGO); >> > > Instead of doing this as a module parameter, we could either avoid > advertising SVM support in cpuid from userspace or we could disable > SVM in the BIOS. > > I really like the later approach because it gives a better error > message within the guest. We could also keep the module parameter > but it would be nice to support disabling SVM in userspace too. That's a really good idea. We'd only have to check the MSR value on EFER set, to know if setting SVME is allowed. That would allow for a really easy control over the SVM feature on a per-guest basis and from userspace. 
Cool, Alex > Regards, > > Anthony Liguori > >> static void kvm_reput_irq(struct vcpu_svm *svm); >> static void svm_flush_tlb(struct kvm_vcpu *vcpu); >> @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) >> if (boot_cpu_has(X86_FEATURE_NX)) >> kvm_enable_efer_bits(EFER_NX); >> + if (nested) >> + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); >> + >> for_each_online_cpu(cpu) { >> r = svm_cpu_init(cpu); >> if (r) >> > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-30 20:44 ` Alexander Graf @ 2008-10-30 20:52 ` Anthony Liguori 2008-11-02 9:11 ` Avi Kivity 0 siblings, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 20:52 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > > On 30.10.2008, at 20:16, Anthony Liguori wrote: >> >> Instead of doing this as a module parameter, we could either avoid >> advertising SVM support in cpuid from userspace or we could disable >> SVM in the BIOS. >> >> I really like the later approach because it gives a better error >> message within the guest. We could also keep the module parameter >> but it would be nice to support disabling SVM in userspace too. > > That's a really good idea. We'd only have to check the MSR value on > EFER set, to know if setting SVME is allowed. That would allow for a > really easy control over the SVM feature on a per-guest basis and from > userspace. And with Gleb's BIOS configuration changes that are now upstream, it's really easy to communicate to the BIOS whether SVM should be enabled. Regards, Anthony LIguori > Cool, > > Alex > >> Regards, >> >> Anthony Liguori >> >>> static void kvm_reput_irq(struct vcpu_svm *svm); >>> static void svm_flush_tlb(struct kvm_vcpu *vcpu); >>> @@ -457,6 +460,9 @@ static __init int svm_hardware_setup(void) >>> if (boot_cpu_has(X86_FEATURE_NX)) >>> kvm_enable_efer_bits(EFER_NX); >>> + if (nested) >>> + kvm_enable_efer_bits(MSR_EFER_SVME_MASK); >>> + >>> for_each_online_cpu(cpu) { >>> r = svm_cpu_init(cpu); >>> if (r) >>> >> > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-10-30 20:52 ` Anthony Liguori @ 2008-11-02 9:11 ` Avi Kivity 2008-11-03 7:37 ` Alexander Graf 0 siblings, 1 reply; 53+ messages in thread From: Avi Kivity @ 2008-11-02 9:11 UTC (permalink / raw) To: Anthony Liguori; +Cc: Alexander Graf, kvm, joro Anthony Liguori wrote: >> That's a really good idea. We'd only have to check the MSR value on >> EFER set, to know if setting SVME is allowed. That would allow for a >> really easy control over the SVM feature on a per-guest basis and >> from userspace. > > And with Gleb's BIOS configuration changes that are now upstream, it's > really easy to communicate to the BIOS whether SVM should be enabled. We need the ability to control this on the qemu level rather than guest bios level. For example, once you enable the cpuid bit, then, strictly speaking, you lose the ability to live migrate across vendors. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-11-02 9:11 ` Avi Kivity @ 2008-11-03 7:37 ` Alexander Graf 2008-11-03 13:54 ` Avi Kivity 0 siblings, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-11-03 7:37 UTC (permalink / raw) To: Avi Kivity; +Cc: Anthony Liguori, kvm@vger.kernel.org, joro@8bytes.org Am 02.11.2008 um 10:11 schrieb Avi Kivity <avi@redhat.com>: > Anthony Liguori wrote: >>> That's a really good idea. We'd only have to check the MSR value >>> on EFER set, to know if setting SVME is allowed. That would allow >>> for a really easy control over the SVM feature on a per-guest >>> basis and from userspace. >> >> And with Gleb's BIOS configuration changes that are now upstream, >> it's really easy to communicate to the BIOS whether SVM should be >> enabled. > > We need the ability to control this on the qemu level rather than > guest bios level. For example, once you enable the cpuid bit, then, > strictly speaking, you lose the ability to live migrate across > vendors. Hum, so we could patch the bios to always try and activate svm when the cpuid bit is found and disable it (msr write) when not. Then we could simply enable/disable the cpuid capability in qemu if we don't want it exposed. > > > -- > error compiling committee.c: too many arguments to function > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 9/9] Allow setting the SVME bit v5 2008-11-03 7:37 ` Alexander Graf @ 2008-11-03 13:54 ` Avi Kivity 0 siblings, 0 replies; 53+ messages in thread From: Avi Kivity @ 2008-11-03 13:54 UTC (permalink / raw) To: Alexander Graf; +Cc: Anthony Liguori, kvm@vger.kernel.org, joro@8bytes.org Alexander Graf wrote: >> We need the ability to control this on the qemu level rather than >> guest bios level. For example, once you enable the cpuid bit, then, >> strictly speaking, you lose the ability to live migrate across vendors. > > Hum, so we could patch the bios to always try and activate svm when > the cpuid bit is found and disable it (msr write) when not. It needs to do this anyway in case it finds itself running on an Intel cpu or a pre-nested-svm kernel. > Then we could simply enable/disable the cpuid capability in qemu if we > don't want it exposed. Yes. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 8/9] allow read access to MSR_VM_VR 2008-10-20 17:04 ` [PATCH 8/9] allow read access to MSR_VM_VR Alexander Graf 2008-10-20 17:04 ` [PATCH 9/9] Allow setting the SVME bit v5 Alexander Graf @ 2008-10-30 19:15 ` Anthony Liguori 2008-10-31 10:57 ` Joerg Roedel 1 sibling, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 19:15 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi, Joerg Roedel Alexander Graf wrote: > From: Joerg Roedel <joerg.roedel@amd.com> > > KVM tries to read the VM_CR MSR to find out if SVM was disabled by > the BIOS. So implement read support for this MSR to make nested > SVM running. > > Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 3 +++ > 1 files changed, 3 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 720b610..4582699 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -1869,6 +1869,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data) > case MSR_VM_HSAVE_PA: > *data = 0; > break; > + case MSR_VM_CR: > + *data = 0; > + break; > It would be useful to support setting this value too. Then we could disable SVM support by writing the MSR in the BIOS. Regards, Anthony Liguori > default: > return kvm_get_msr_common(vcpu, ecx, data); > } > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 8/9] allow read access to MSR_VM_VR 2008-10-30 19:15 ` [PATCH 8/9] allow read access to MSR_VM_VR Anthony Liguori @ 2008-10-31 10:57 ` Joerg Roedel 0 siblings, 0 replies; 53+ messages in thread From: Joerg Roedel @ 2008-10-31 10:57 UTC (permalink / raw) To: Anthony Liguori; +Cc: Alexander Graf, kvm, avi, Joerg Roedel On Thu, Oct 30, 2008 at 02:15:23PM -0500, Anthony Liguori wrote: > Alexander Graf wrote: > >From: Joerg Roedel <joerg.roedel@amd.com> > > > >KVM tries to read the VM_CR MSR to find out if SVM was disabled by > >the BIOS. So implement read support for this MSR to make nested > >SVM running. > > > >Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> > >Signed-off-by: Alexander Graf <agraf@suse.de> > >--- > > arch/x86/kvm/svm.c | 3 +++ > > 1 files changed, 3 insertions(+), 0 deletions(-) > > > >diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > >index 720b610..4582699 100644 > >--- a/arch/x86/kvm/svm.c > >+++ b/arch/x86/kvm/svm.c > >@@ -1869,6 +1869,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, > >unsigned ecx, u64 *data) > > case MSR_VM_HSAVE_PA: > > *data = 0; > > break; > >+ case MSR_VM_CR: > >+ *data = 0; > >+ break; > > > > It would be useful to support setting this value too. Then we could > disable SVM support by writing the MSR in the BIOS. Things are a bit more complicated here. On real hardware you can't disable SVM by writing so VM_CR. You have to implement the SVM_KEY MSR for that. But that should not be too difficult. Joerg ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: Add VMRUN handler v5 2008-10-20 17:04 ` [PATCH 6/9] Add VMRUN handler v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 7/9] Add VMEXIT handler and intercepts v5 Alexander Graf @ 2008-10-28 18:38 ` Mike Day 2008-10-29 5:35 ` Alexander Graf 2008-11-18 14:14 ` [PATCH 6/9] " Muli Ben-Yehuda 2 siblings, 1 reply; 53+ messages in thread From: Mike Day @ 2008-10-28 18:38 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, anthony, avi On 20/10/08 19:04 +0200, Alexander Graf wrote: > +static int vmrun_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > +{ > + nsvm_printk("VMrun\n"); > + if (nested_svm_check_permissions(svm)) > + return 1; > + > + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; > + skip_emulated_instruction(&svm->vcpu); > + > + if (nested_svm_do(svm, svm->vmcb->save.rax, 0, > + NULL, nested_svm_vmrun)) > + return 1; > + > + if (nested_svm_do(svm, svm->vmcb->control.msrpm_base_pa, 0, > + NULL, nested_svm_vmrun_msrpm)) > + return 1; > + > + return 1; > +} A nitpick, but you could remove the last if() statement and one of the last two return statements. Unless you forsee more calls to nested_svm_do() in here. Mike -- Mike Day http://www.ncultra.org AIM: ncmikeday | Yahoo IM: ultra.runner PGP key: http://www.ncultra.org/ncmike/pubkey.asc ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: Add VMRUN handler v5 2008-10-28 18:38 ` Add VMRUN handler v5 Mike Day @ 2008-10-29 5:35 ` Alexander Graf 0 siblings, 0 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-29 5:35 UTC (permalink / raw) To: ncmike; +Cc: kvm, joro, anthony, avi On 28.10.2008, at 19:38, Mike Day wrote: > On 20/10/08 19:04 +0200, Alexander Graf wrote: > >> +static int vmrun_interception(struct vcpu_svm *svm, struct kvm_run >> *kvm_run) >> +{ >> + nsvm_printk("VMrun\n"); >> + if (nested_svm_check_permissions(svm)) >> + return 1; >> + >> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >> + skip_emulated_instruction(&svm->vcpu); >> + >> + if (nested_svm_do(svm, svm->vmcb->save.rax, 0, >> + NULL, nested_svm_vmrun)) >> + return 1; >> + >> + if (nested_svm_do(svm, svm->vmcb->control.msrpm_base_pa, 0, >> + NULL, nested_svm_vmrun_msrpm)) >> + return 1; >> + >> + return 1; >> +} > > A nitpick, but you could remove the last if() statement and one of > the last two return statements. Unless you forsee more calls to > nested_svm_do() in here. I had the IOPM merger in here and actually like the fall-through aspect of the function :-). But I guess this again is a personal taste thing. Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 6/9] Add VMRUN handler v5 2008-10-20 17:04 ` [PATCH 6/9] Add VMRUN handler v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 7/9] Add VMEXIT handler and intercepts v5 Alexander Graf 2008-10-28 18:38 ` Add VMRUN handler v5 Mike Day @ 2008-11-18 14:14 ` Muli Ben-Yehuda 2 siblings, 0 replies; 53+ messages in thread From: Muli Ben-Yehuda @ 2008-11-18 14:14 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, anthony, avi On Mon, Oct 20, 2008 at 07:04:47PM +0200, Alexander Graf wrote: > +static int nested_svm_vmrun(struct vcpu_svm *svm, void *arg1, > + void *arg2, void *opaque) > +{ > + struct vmcb *nested_vmcb = (struct vmcb *)arg1; > + struct vmcb *hsave = svm->hsave; > + > + /* nested_vmcb is our indicator if nested SVM is activated */ > + svm->nested_vmcb = svm->vmcb->save.rax; > + > + /* Clear internal status */ > + svm->vcpu.arch.exception.pending = false; > + > + /* Save the old vmcb, so we don't need to pick what we save, but > + can restore everything when a VMEXIT occurs */ > + memcpy(hsave, svm->vmcb, sizeof(struct vmcb)); > + /* We need to remember the original CR3 in the SPT case */ > + if (!npt_enabled) > + hsave->save.cr3 = svm->vcpu.arch.cr3; > + hsave->save.rip = svm->next_rip; > + > + if (svm->vmcb->save.rflags & X86_EFLAGS_IF) > + svm->vcpu.arch.hflags |= HF_HIF_MASK; > + else > + svm->vcpu.arch.hflags &= ~HF_HIF_MASK; > + > + /* Load the nested guest state */ > + svm->vmcb->save.es = nested_vmcb->save.es; > + svm->vmcb->save.cs = nested_vmcb->save.cs; > + svm->vmcb->save.ss = nested_vmcb->save.ss; > + svm->vmcb->save.ds = nested_vmcb->save.ds; > + svm->vmcb->save.gdtr = nested_vmcb->save.gdtr; > + svm->vmcb->save.idtr = nested_vmcb->save.idtr; > + svm->vmcb->save.rflags = nested_vmcb->save.rflags; > + svm_set_efer(&svm->vcpu, nested_vmcb->save.efer); > + svm_set_cr0(&svm->vcpu, nested_vmcb->save.cr0); > + svm_set_cr4(&svm->vcpu, nested_vmcb->save.cr4); > + if (npt_enabled) { > + svm->vmcb->save.cr3 = nested_vmcb->save.cr3; > + 
svm->vcpu.arch.cr3 = nested_vmcb->save.cr3; > + } else { > + kvm_set_cr3(&svm->vcpu, nested_vmcb->save.cr3); > + kvm_mmu_reset_context(&svm->vcpu); > + } > + svm->vmcb->save.cr2 = nested_vmcb->save.cr2; > + kvm_register_write(&svm->vcpu, VCPU_REGS_RAX, nested_vmcb->save.rax); > + kvm_register_write(&svm->vcpu, VCPU_REGS_RSP, nested_vmcb->save.rsp); > + kvm_register_write(&svm->vcpu, VCPU_REGS_RIP, nested_vmcb->save.rip); > + /* In case we don't even reach vcpu_run, the fields are not updated */ > + svm->vmcb->save.rax = nested_vmcb->save.rax; > + svm->vmcb->save.rsp = nested_vmcb->save.rsp; > + svm->vmcb->save.rip = nested_vmcb->save.rip; > + svm->vmcb->save.dr7 = nested_vmcb->save.dr7; > + svm->vmcb->save.dr6 = nested_vmcb->save.dr6; > + svm->vmcb->save.cpl = nested_vmcb->save.cpl; > + > + /* We don't want a nested guest to be more powerful than the guest, > + so all intercepts are ORed */ > + svm->vmcb->control.intercept_cr_read |= > + nested_vmcb->control.intercept_cr_read; > + svm->vmcb->control.intercept_cr_write |= > + nested_vmcb->control.intercept_cr_write; > + svm->vmcb->control.intercept_dr_read |= > + nested_vmcb->control.intercept_dr_read; > + svm->vmcb->control.intercept_dr_write |= > + nested_vmcb->control.intercept_dr_write; > + svm->vmcb->control.intercept_exceptions |= > + nested_vmcb->control.intercept_exceptions; > + > + svm->vmcb->control.intercept |= nested_vmcb->control.intercept; > + > + svm->nested_vmcb_msrpm = nested_vmcb->control.msrpm_base_pa; > + > + force_new_asid(&svm->vcpu); > + svm->vmcb->control.exit_int_info = nested_vmcb->control.exit_int_info; > + svm->vmcb->control.exit_int_info_err = nested_vmcb->control.exit_int_info_err; > + svm->vmcb->control.int_ctl = nested_vmcb->control.int_ctl | V_INTR_MASKING_MASK; > + if (nested_vmcb->control.int_ctl & V_IRQ_MASK) { > + nsvm_printk("nSVM Injecting Interrupt: 0x%x\n", > + nested_vmcb->control.int_ctl); > + } > + if (nested_vmcb->control.int_ctl & V_INTR_MASKING_MASK) > + 
svm->vcpu.arch.hflags |= HF_VINTR_MASK; > + else > + svm->vcpu.arch.hflags &= ~HF_VINTR_MASK; > + > + nsvm_printk("nSVM exit_int_info: 0x%x | int_state: 0x%x\n", > + nested_vmcb->control.exit_int_info, > + nested_vmcb->control.int_state); > + > + svm->vmcb->control.int_vector = nested_vmcb->control.int_vector; > + svm->vmcb->control.int_state = nested_vmcb->control.int_state; > + svm->vmcb->control.tsc_offset += nested_vmcb->control.tsc_offset; > + if (nested_vmcb->control.event_inj & SVM_EVTINJ_VALID) > + nsvm_printk("Injecting Event: 0x%x\n", > + nested_vmcb->control.event_inj); > + svm->vmcb->control.event_inj = nested_vmcb->control.event_inj; > + svm->vmcb->control.event_inj_err = nested_vmcb->control.event_inj_err; > + > + svm->vcpu.arch.hflags |= HF_GIF_MASK; > + > + return 1; I think there's a small bug here... > +static int vmrun_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > +{ > + nsvm_printk("VMrun\n"); > + if (nested_svm_check_permissions(svm)) > + return 1; > + > + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; > + skip_emulated_instruction(&svm->vcpu); > + > + if (nested_svm_do(svm, svm->vmcb->save.rax, 0, > + NULL, nested_svm_vmrun)) > + return 1; > + ... which manifests here. nested_svm_run always returns 1, which will cause us to return here rather than sync the msrpm's. Cheers, Muli -- The First Workshop on I/O Virtualization (WIOV '08) Dec 2008, San Diego, CA, http://www.usenix.org/wiov08/ <-> SYSTOR 2009---The Israeli Experimental Systems Conference http://www.haifa.il.ibm.com/conferences/systor2009/ ^ permalink raw reply [flat|nested] 53+ messages in thread
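The control-flow bug Muli points out can be reduced to a standalone sketch. The function bodies here are stand-ins (in the patch, nested_svm_do() maps the guest page and forwards the handler's return value): because the handler always returns 1, the first `if` returns before the msrpm step ever runs.

```c
#include <assert.h>

static int msrpm_synced;

/* Stand-ins for the patch's two nested_svm_do() calls. */
static int do_vmrun(void)       { return 1; }                  /* always 1 */
static int do_vmrun_msrpm(void) { msrpm_synced = 1; return 1; }

/* The shape as posted: the early return skips the msrpm sync. */
static int vmrun_interception_as_posted(void)
{
    if (do_vmrun())
        return 1;          /* taken on every VMRUN */
    if (do_vmrun_msrpm())
        return 1;
    return 1;
}

/* One possible fix: only bail out on failure, so both steps run. */
static int vmrun_interception_fixed(void)
{
    if (!do_vmrun())
        return 1;
    do_vmrun_msrpm();
    return 1;
}
```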
* Re: [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 2008-10-20 17:04 ` [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 6/9] Add VMRUN handler v5 Alexander Graf @ 2008-10-30 18:06 ` Anthony Liguori 2008-10-30 18:44 ` Alexander Graf 1 sibling, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 18:06 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > This implements the VMLOAD and VMSAVE instructions, that usually surround > the VMRUN instructions. Both instructions load / restore the same elements, > so we only need to implement them once. > > v2 fixes CPL checking and replaces memcpy by assignments > v3 makes use of the new permission checking > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++- > 1 files changed, 58 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index eb301fe..10ad02b 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -1242,6 +1242,62 @@ static int nested_svm_do(struct vcpu_svm *svm, > return retval; > } > > +static int nested_svm_vmloadsave(struct vmcb *from_vmcb, struct vmcb *to_vmcb) > +{ > + to_vmcb->save.fs = from_vmcb->save.fs; > + to_vmcb->save.gs = from_vmcb->save.gs; > + to_vmcb->save.tr = from_vmcb->save.tr; > + to_vmcb->save.ldtr = from_vmcb->save.ldtr; > + to_vmcb->save.kernel_gs_base = from_vmcb->save.kernel_gs_base; > + to_vmcb->save.star = from_vmcb->save.star; > + to_vmcb->save.lstar = from_vmcb->save.lstar; > + to_vmcb->save.cstar = from_vmcb->save.cstar; > + to_vmcb->save.sfmask = from_vmcb->save.sfmask; > + to_vmcb->save.sysenter_cs = from_vmcb->save.sysenter_cs; > + to_vmcb->save.sysenter_esp = from_vmcb->save.sysenter_esp; > + to_vmcb->save.sysenter_eip = from_vmcb->save.sysenter_eip; > + > + return 1; > +} > + > +static int nested_svm_vmload(struct vcpu_svm *svm, void *nested_vmcb, > + void *arg2, void 
*opaque) > +{ > + return nested_svm_vmloadsave((struct vmcb *)nested_vmcb, svm->vmcb); > +} > + > +static int nested_svm_vmsave(struct vcpu_svm *svm, void *nested_vmcb, > + void *arg2, void *opaque) > +{ > + return nested_svm_vmloadsave(svm->vmcb, (struct vmcb *)nested_vmcb); > +} > + > +static int vmload_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > +{ > + if (nested_svm_check_permissions(svm)) > + return 1; > + > + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; > + skip_emulated_instruction(&svm->vcpu); > + > + nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, nested_svm_vmload); > The whole trampoline thing seems awkward to me. I think it would be more reasonable to just open code this routine and use helper functions when appropriate. Regards, Anthony Liguori ^ permalink raw reply [flat|nested] 53+ messages in thread
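The "trampoline" Anthony objects to has a simple shape. This standalone sketch, with the guest-page mapping reduced to hypothetical malloc/free stand-ins, shows why Alex prefers it: the map/unmap bookkeeping lives in exactly one place, and each instruction supplies only a handler.

```c
#include <assert.h>
#include <stdlib.h>

typedef int (*nested_handler)(void *vmcb_page);

/* Hypothetical stand-ins for mapping a guest physical page into the host. */
static void *map_guest_page(unsigned long gpa) { (void)gpa; return malloc(4096); }
static void  unmap_guest_page(void *page)      { free(page); }

/* The trampoline: one function owns map/unmap, and the per-instruction
 * handler (e.g. nested_svm_vmload) runs in the middle. */
static int nested_svm_do(unsigned long gpa, nested_handler handler)
{
    void *page = map_guest_page(gpa);
    if (!page)
        return 0;
    int ret = handler(page);
    unmap_guest_page(page);
    return ret;
}

static int vmload_calls;
static int vmload_handler(void *vmcb) { (void)vmcb; vmload_calls++; return 1; }
```

Anthony's alternative inlines the map/handler/unmap sequence at each call site, trading the indirection for flatter code; with only a handful of callers, either style reads reasonably.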
* Re: [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 2008-10-30 18:06 ` [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 Anthony Liguori @ 2008-10-30 18:44 ` Alexander Graf 2008-10-30 19:14 ` Anthony Liguori 0 siblings, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-30 18:44 UTC (permalink / raw) To: Anthony Liguori; +Cc: kvm, joro, avi On 30.10.2008, at 19:06, Anthony Liguori wrote: > Alexander Graf wrote: >> This implements the VMLOAD and VMSAVE instructions, that usually >> surround >> the VMRUN instructions. Both instructions load / restore the same >> elements, >> so we only need to implement them once. >> >> v2 fixes CPL checking and replaces memcpy by assignments >> v3 makes use of the new permission checking >> snip >> >> +static int vmload_interception(struct vcpu_svm *svm, struct >> kvm_run *kvm_run) >> +{ >> + if (nested_svm_check_permissions(svm)) >> + return 1; >> + >> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >> + skip_emulated_instruction(&svm->vcpu); >> + >> + nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, >> nested_svm_vmload); >> > > The whole trampoline thing seems awkward to me. I think it would be > more reasonable to just open code this routine and use helper > functions when appropriate. What exactly do you mean? Do you prefer to duplicate the guest-page- mapping code several times? Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 2008-10-30 18:44 ` Alexander Graf @ 2008-10-30 19:14 ` Anthony Liguori 2008-10-30 21:02 ` Alexander Graf 0 siblings, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 19:14 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > > On 30.10.2008, at 19:06, Anthony Liguori wrote: > >> Alexander Graf wrote: >>> This implements the VMLOAD and VMSAVE instructions, that usually >>> surround >>> the VMRUN instructions. Both instructions load / restore the same >>> elements, >>> so we only need to implement them once. >>> >>> v2 fixes CPL checking and replaces memcpy by assignments >>> v3 makes use of the new permission checking >>> > > snip > >>> >>> +static int vmload_interception(struct vcpu_svm *svm, struct kvm_run >>> *kvm_run) >>> +{ >>> + if (nested_svm_check_permissions(svm)) >>> + return 1; >>> + >>> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >>> + skip_emulated_instruction(&svm->vcpu); >>> + >>> + nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, >>> nested_svm_vmload); >>> >> >> The whole trampoline thing seems awkward to me. I think it would be >> more reasonable to just open code this routine and use helper >> functions when appropriate. > > What exactly do you mean? Do you prefer to duplicate the > guest-page-mapping code several times? You do: foo(x, y, cb); where foo is: // something with x // something with y cb(); // something else And then do: foo(1, 2, bar); An alternative module would be: foo_helper1(x) // something with x foo_helper2(y) // something with y foo_cleanup() // something else And then: foo_helper1(1); foo_helper2(2) bar() foo_cleanup(); And since you only sometimes pass both x, y parameters, I do think this later style would result in more readable code. Regards, Anthony Liguori > Alex > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 2008-10-30 19:14 ` Anthony Liguori @ 2008-10-30 21:02 ` Alexander Graf 2008-10-30 21:38 ` Anthony Liguori 0 siblings, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-30 21:02 UTC (permalink / raw) To: Anthony Liguori; +Cc: kvm, joro, avi On 30.10.2008, at 20:14, Anthony Liguori wrote: > Alexander Graf wrote: >> >> On 30.10.2008, at 19:06, Anthony Liguori wrote: >> >>> Alexander Graf wrote: >>>> This implements the VMLOAD and VMSAVE instructions, that usually >>>> surround >>>> the VMRUN instructions. Both instructions load / restore the same >>>> elements, >>>> so we only need to implement them once. >>>> >>>> v2 fixes CPL checking and replaces memcpy by assignments >>>> v3 makes use of the new permission checking >>>> >> >> snip >> >>>> >>>> +static int vmload_interception(struct vcpu_svm *svm, struct >>>> kvm_run *kvm_run) >>>> +{ >>>> + if (nested_svm_check_permissions(svm)) >>>> + return 1; >>>> + >>>> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >>>> + skip_emulated_instruction(&svm->vcpu); >>>> + >>>> + nested_svm_do(svm, svm->vmcb->save.rax, 0, NULL, >>>> nested_svm_vmload); >>>> >>> >>> The whole trampoline thing seems awkward to me. I think it would >>> be more reasonable to just open code this routine and use helper >>> functions when appropriate. >> >> What exactly do you mean? Do you prefer to duplicate the guest-page- >> mapping code several times? > > You do: > > foo(x, y, cb); > > where foo is: > > // something with x > // something with y > > cb(); > > // something else > > And then do: > > foo(1, 2, bar); > > An alternative module would be: > > foo_helper1(x) > // something with x > > foo_helper2(y) > // something with y > > foo_cleanup() > // something else > > And then: > > foo_helper1(1); > foo_helper2(2) > bar() > foo_cleanup(); > > And since you only sometimes pass both x, y parameters, I do think > this later style would result in more readable code. 
I don't see this improving code readability, but maybe that again is just personal preference. Is this a real problem, or just again a different esthetic view? Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 2008-10-30 21:02 ` Alexander Graf @ 2008-10-30 21:38 ` Anthony Liguori 0 siblings, 0 replies; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 21:38 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > > On 30.10.2008, at 20:14, Anthony Liguori wrote: > > I don't see this improving code readability, but maybe that again is > just personal preference. Is this a real problem, or just again a > different esthetic view? I'm not wildy opposed to your implementation so I'm perfectly willing to chalk it up to personal preference. Regards, Anthony Liguori > Alex > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-20 17:04 ` [PATCH 4/9] Implement hsave v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 5/9] Add VMLOAD and VMSAVE handlers v5 Alexander Graf @ 2008-10-30 18:04 ` Anthony Liguori 2008-10-30 18:43 ` Alexander Graf 1 sibling, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 18:04 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > Implement the hsave MSR, that gives the VCPU a GPA to save the > old guest state in. > > v2 allows userspace to save/restore hsave > v4 dummys out the hsave MSR, so we use a host page > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/kvm_svm.h | 1 + > arch/x86/kvm/svm.c | 12 ++++++++++++ > 2 files changed, 13 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/kvm_svm.h b/arch/x86/kvm/kvm_svm.h > index 65ef0fc..40cb128 100644 > --- a/arch/x86/kvm/kvm_svm.h > +++ b/arch/x86/kvm/kvm_svm.h > @@ -41,6 +41,7 @@ struct vcpu_svm { > unsigned long host_dr7; > > u32 *msrpm; > + struct vmcb *hsave; > }; > > #endif > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 62bfa2b..eb301fe 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -640,6 +640,7 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id) > struct vcpu_svm *svm; > struct page *page; > struct page *msrpm_pages; > + struct page *hsave_page; > int err; > > svm = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL); > @@ -665,6 +666,11 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id) > svm->msrpm = page_address(msrpm_pages); > svm_vcpu_init_msrpm(svm->msrpm); > > + hsave_page = alloc_page(GFP_KERNEL); > + if (!hsave_page) > + goto uninit; > + svm->hsave = page_address(hsave_page); > + > svm->vmcb = page_address(page); > clear_page(svm->vmcb); > svm->vmcb_pa = page_to_pfn(page) << PAGE_SHIFT; > @@ -694,6 +700,7 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu) > > __free_page(pfn_to_page(svm->vmcb_pa >> 
PAGE_SHIFT)); > __free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER); > + __free_page(virt_to_page(svm->hsave)); > kvm_vcpu_uninit(vcpu); > kmem_cache_free(kvm_vcpu_cache, svm); > } > @@ -1376,6 +1383,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data) > case MSR_IA32_LASTINTTOIP: > *data = svm->vmcb->save.last_excp_to; > break; > + case MSR_VM_HSAVE_PA: > + *data = 0; > + break; > I'm confused as to why you aren't allowing userspace to set/get this MSR? Wouldn't it be needed for save/restore? Regards, Anthony Liguori > default: > return kvm_get_msr_common(vcpu, ecx, data); > } > @@ -1470,6 +1480,8 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data) > pr_unimpl(vcpu, "unimplemented perfctr wrmsr: 0x%x data 0x%llx\n", ecx, data); > > break; > + case MSR_VM_HSAVE_PA: > + break; > default: > return kvm_set_msr_common(vcpu, ecx, data); > } > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 18:04 ` [PATCH 4/9] Implement hsave v5 Anthony Liguori @ 2008-10-30 18:43 ` Alexander Graf 2008-10-30 19:05 ` Anthony Liguori 2008-10-30 19:29 ` Avi Kivity 0 siblings, 2 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-30 18:43 UTC (permalink / raw) To: Anthony Liguori; +Cc: kvm, joro, avi On 30.10.2008, at 19:04, Anthony Liguori wrote: > Alexander Graf wrote: >> Implement the hsave MSR, that gives the VCPU a GPA to save the >> old guest state in. >> >> v2 allows userspace to save/restore hsave >> v4 dummys out the hsave MSR, so we use a host page >> >> snip >> >> kmem_cache_free(kvm_vcpu_cache, svm); >> } >> @@ -1376,6 +1383,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, >> unsigned ecx, u64 *data) >> case MSR_IA32_LASTINTTOIP: >> *data = svm->vmcb->save.last_excp_to; >> break; >> + case MSR_VM_HSAVE_PA: >> + *data = 0; >> + break; >> > > I'm confused as to why you aren't allowing userspace to set/get this > MSR? Wouldn't it be needed for save/restore? I don't see any benefit from actually exporting that MSR atm. KVM does never read it and it'd just add additional overhead on every userspace transition, as it'd need to be synced every time. Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 18:43 ` Alexander Graf @ 2008-10-30 19:05 ` Anthony Liguori 2008-10-30 19:29 ` Avi Kivity 1 sibling, 0 replies; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 19:05 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > > On 30.10.2008, at 19:04, Anthony Liguori wrote: > > >>> >>> kmem_cache_free(kvm_vcpu_cache, svm); >>> } >>> @@ -1376,6 +1383,9 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, >>> unsigned ecx, u64 *data) >>> case MSR_IA32_LASTINTTOIP: >>> *data = svm->vmcb->save.last_excp_to; >>> break; >>> + case MSR_VM_HSAVE_PA: >>> + *data = 0; >>> + break; >>> >> >> I'm confused as to why you aren't allowing userspace to set/get this >> MSR? Wouldn't it be needed for save/restore? > > I don't see any benefit from actually exporting that MSR atm. KVM does > never read it and it'd just add additional overhead on every userspace > transition, as it'd need to be synced every time. Save and restore? Regards, Anthony Liguori > Alex > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 18:43 ` Alexander Graf 2008-10-30 19:05 ` Anthony Liguori @ 2008-10-30 19:29 ` Avi Kivity 2008-10-30 20:38 ` Alexander Graf 1 sibling, 1 reply; 53+ messages in thread From: Avi Kivity @ 2008-10-30 19:29 UTC (permalink / raw) To: Alexander Graf; +Cc: Anthony Liguori, kvm, joro Alexander Graf wrote: > > I don't see any benefit from actually exporting that MSR atm. KVM does > never read it and it'd just add additional overhead on every userspace > transition, as it'd need to be synced every time. We only sync MSRs on save/restore. -- I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain. ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 19:29 ` Avi Kivity @ 2008-10-30 20:38 ` Alexander Graf 2008-10-30 20:44 ` Anthony Liguori 2008-11-02 9:24 ` Avi Kivity 0 siblings, 2 replies; 53+ messages in thread From: Alexander Graf @ 2008-10-30 20:38 UTC (permalink / raw) To: Avi Kivity; +Cc: Anthony Liguori, kvm, joro On 30.10.2008, at 20:29, Avi Kivity wrote: > Alexander Graf wrote: >> >> I don't see any benefit from actually exporting that MSR atm. KVM >> does never read it and it'd just add additional overhead on every >> userspace transition, as it'd need to be synced every time. > > We only sync MSRs on save/restore. Oh, really? Looks like I interpreted that code wrong :-). Nevertheless, we don't use the hsave given by the guest OS, because we can't trust it. So saving and restoring a non-used value sounds pretty useless - unless you're migrating from KVM to Qemu. Is this kind of migration (going to be) possible? Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 20:38 ` Alexander Graf @ 2008-10-30 20:44 ` Anthony Liguori 2008-10-30 20:47 ` Alexander Graf 2008-11-02 9:24 ` Avi Kivity 1 sibling, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 20:44 UTC (permalink / raw) To: Alexander Graf; +Cc: Avi Kivity, kvm, joro Alexander Graf wrote: > > On 30.10.2008, at 20:29, Avi Kivity wrote: > >> Alexander Graf wrote: >>> >>> I don't see any benefit from actually exporting that MSR atm. KVM >>> does never read it and it'd just add additional overhead on every >>> userspace transition, as it'd need to be synced every time. >> >> We only sync MSRs on save/restore. > > Oh, really? Looks like I interpreted that code wrong :-). > Nevertheless, we don't use the hsave given by the guest OS, because we > can't trust it. So saving and restoring a non-used value sounds pretty > useless - unless you're migrating from KVM to Qemu. Is this kind of > migration (going to be) possible? But what if the guest tries to reread the hsave value it wrote? Couldn't that break a guest? Regards, Anthony Liguori > Alex > ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 20:44 ` Anthony Liguori @ 2008-10-30 20:47 ` Alexander Graf 2008-10-30 22:05 ` Anthony Liguori 0 siblings, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-30 20:47 UTC (permalink / raw) To: Anthony Liguori; +Cc: Avi Kivity, kvm, joro On 30.10.2008, at 21:44, Anthony Liguori wrote: > Alexander Graf wrote: >> >> On 30.10.2008, at 20:29, Avi Kivity wrote: >> >>> Alexander Graf wrote: >>>> >>>> I don't see any benefit from actually exporting that MSR atm. KVM >>>> does never read it and it'd just add additional overhead on every >>>> userspace transition, as it'd need to be synced every time. >>> >>> We only sync MSRs on save/restore. >> >> Oh, really? Looks like I interpreted that code wrong :-). >> Nevertheless, we don't use the hsave given by the guest OS, because >> we can't trust it. So saving and restoring a non-used value sounds >> pretty useless - unless you're migrating from KVM to Qemu. Is this >> kind of migration (going to be) possible? > > But what if the guest tries to reread the hsave value it wrote? > Couldn't that break a guest? Yes, it could. I just haven't seen any VMM do that yet. But I agree - it could be implemented, I just didn't deem it necessary for now. Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 20:47 ` Alexander Graf @ 2008-10-30 22:05 ` Anthony Liguori 0 siblings, 0 replies; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 22:05 UTC (permalink / raw) To: Alexander Graf; +Cc: Avi Kivity, kvm, joro Alexander Graf wrote: > > On 30.10.2008, at 21:44, Anthony Liguori wrote: > >> Alexander Graf wrote: >>> >>> On 30.10.2008, at 20:29, Avi Kivity wrote: >>> >>>> Alexander Graf wrote: >>>>> >>>>> I don't see any benefit from actually exporting that MSR atm. KVM >>>>> does never read it and it'd just add additional overhead on every >>>>> userspace transition, as it'd need to be synced every time. >>>> >>>> We only sync MSRs on save/restore. >>> >>> Oh, really? Looks like I interpreted that code wrong :-). >>> Nevertheless, we don't use the hsave given by the guest OS, because >>> we can't trust it. So saving and restoring a non-used value sounds >>> pretty useless - unless you're migrating from KVM to Qemu. Is this >>> kind of migration (going to be) possible? >> >> But what if the guest tries to reread the hsave value it wrote? >> Couldn't that break a guest? > > Yes, it could. I just haven't seen any VMM do that yet. But I agree - > it could be implemented, I just didn't deem it necessary for now. It's always easier to do it right the first time then to wait until someone has to debug a weird issue :-) Regards, Anthony Liguori > Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: [PATCH 4/9] Implement hsave v5 2008-10-30 20:38 ` Alexander Graf 2008-10-30 20:44 ` Anthony Liguori @ 2008-11-02 9:24 ` Avi Kivity 1 sibling, 0 replies; 53+ messages in thread From: Avi Kivity @ 2008-11-02 9:24 UTC (permalink / raw) To: Alexander Graf; +Cc: Anthony Liguori, kvm, joro Alexander Graf wrote: > > Oh, really? Looks like I interpreted that code wrong :-). > Nevertheless, we don't use the hsave given by the guest OS, because we > can't trust it. So saving and restoring a non-used value sounds pretty > useless - unless you're migrating from KVM to Qemu. Is this kind of > migration (going to be) possible? Well, at least for testing purposes, it sounds useful. We also can't rule out a guest reading the msr and breaking if the wrong value is found. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 53+ messages in thread
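What Anthony and Avi are asking for amounts to very little code. A minimal sketch (the struct and field names are hypothetical) that remembers the guest's MSR_VM_HSAVE_PA write so a later read, or a save/restore cycle, sees it, even though KVM keeps the trusted state in its own host page:

```c
#include <assert.h>
#include <stdint.h>

struct hsave_state {
    uint64_t guest_hsave_pa;   /* hypothetical: last value the guest wrote */
};

static void hsave_wrmsr(struct hsave_state *s, uint64_t data)
{
    /* Remember the value but never use it as the backing store --
     * the host-allocated page stays the real save area. */
    s->guest_hsave_pa = data;
}

static uint64_t hsave_rdmsr(const struct hsave_state *s)
{
    return s->guest_hsave_pa;  /* a re-reading guest sees its own write */
}
```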
* Re: Implement GIF, clgi and stgi v5 2008-10-20 17:04 ` [PATCH 3/9] Implement GIF, clgi and stgi v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 4/9] Implement hsave v5 Alexander Graf @ 2008-10-27 19:09 ` Mike Day 2008-10-27 19:29 ` Avi Kivity 2008-10-30 18:02 ` [PATCH 3/9] " Anthony Liguori 2 siblings, 1 reply; 53+ messages in thread From: Mike Day @ 2008-10-27 19:09 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, anthony, avi On 20/10/08 19:04 +0200, Alexander Graf wrote: > include/asm-x86/kvm_host.h | 3 +++ This file is not present in my clone of Avi's tree, so the patch doesn't apply. Am I missing a step? thanks, Mike -- Mike Day http://www.ncultra.org AIM: ncmikeday | Yahoo IM: ultra.runner PGP key: http://www.ncultra.org/ncmike/pubkey.asc ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: Implement GIF, clgi and stgi v5 2008-10-27 19:09 ` Implement GIF, clgi and stgi v5 Mike Day @ 2008-10-27 19:29 ` Avi Kivity 2008-10-27 19:40 ` Mike Day 0 siblings, 1 reply; 53+ messages in thread From: Avi Kivity @ 2008-10-27 19:29 UTC (permalink / raw) To: ncmike; +Cc: Alexander Graf, kvm, joro, anthony Mike Day wrote: > On 20/10/08 19:04 +0200, Alexander Graf wrote: > > >> include/asm-x86/kvm_host.h | 3 +++ >> > > This file is not present in my clone of Avi's tree, so the patch > doesn't apply. > > Am I missing a step? > Mainline recently renamed include/asm-x86 to arch/x86/include/asm. Either edit the patches to reflect this, or try 'git am -3' (not sure if the latter will work). -- I have a truly marvellous patch that fixes the bug which this signature is too narrow to contain. ^ permalink raw reply [flat|nested] 53+ messages in thread
* Re: Implement GIF, clgi and stgi v5 2008-10-27 19:29 ` Avi Kivity @ 2008-10-27 19:40 ` Mike Day 0 siblings, 0 replies; 53+ messages in thread From: Mike Day @ 2008-10-27 19:40 UTC (permalink / raw) To: Avi Kivity; +Cc: Alexander Graf, kvm, joro, anthony On 27/10/08 21:29 +0200, Avi Kivity wrote: > > Mainline recently renamed include/asm-x86 to arch/x86/include/asm. > Either edit the patches to reflect this, or try 'git am -3' (not sure > if the latter will work). Thanks! editing the patches works fine. Mike -- Mike Day http://www.ncultra.org AIM: ncmikeday | Yahoo IM: ultra.runner PGP key: http://www.ncultra.org/ncmike/pubkey.asc ^ permalink raw reply [flat|nested] 53+ messages in thread
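Avi's two options can be sketched as shell. The patch file below is a stub created on the spot, not one of the series' real files:

```shell
# Build a stub patch header that still uses the pre-rename path.
printf -- '--- a/include/asm-x86/kvm_host.h\n+++ b/include/asm-x86/kvm_host.h\n' \
    > nested-svm-stub.patch

# Option 1: rewrite the old include/asm-x86 paths in place.
sed -i 's|include/asm-x86/|arch/x86/include/asm/|g' nested-svm-stub.patch
cat nested-svm-stub.patch

# Option 2 (on a real patch file): let git fall back to a three-way merge.
# git am -3 0003-implement-gif.patch
```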
* Re: [PATCH 3/9] Implement GIF, clgi and stgi v5 2008-10-20 17:04 ` [PATCH 3/9] Implement GIF, clgi and stgi v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 4/9] Implement hsave v5 Alexander Graf 2008-10-27 19:09 ` Implement GIF, clgi and stgi v5 Mike Day @ 2008-10-30 18:02 ` Anthony Liguori 2008-10-30 18:10 ` Avi Kivity 2 siblings, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 18:02 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > This patch implements the GIF flag and the clgi and stgi instructions that > set this flag. Only if the flag is set (default), interrupts can be received by > the CPU. > > To keep the information about that somewhere, this patch adds a new hidden > flags vector. that is used to store information that does not go into the > vmcb, but is SVM specific. > > v2 moves the hflags to x86 generic code > v3 makes use of the new permission helper > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 42 +++++++++++++++++++++++++++++++++++++++--- > include/asm-x86/kvm_host.h | 3 +++ > 2 files changed, 42 insertions(+), 3 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index a00421b..62bfa2b 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -614,6 +614,8 @@ static void init_vmcb(struct vcpu_svm *svm) > save->cr4 = 0; > } > force_new_asid(&svm->vcpu); > + > + svm->vcpu.arch.hflags = HF_GIF_MASK; > } > > static int svm_vcpu_reset(struct kvm_vcpu *vcpu) > @@ -1233,6 +1235,36 @@ static int nested_svm_do(struct vcpu_svm *svm, > return retval; > } > > +static int stgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > +{ > + if (nested_svm_check_permissions(svm)) > + return 1; > + > + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; > + skip_emulated_instruction(&svm->vcpu); > + > + svm->vcpu.arch.hflags |= HF_GIF_MASK; > + > + return 1; > +} > It feels a little strange to see this here instead of an implementation of stgi/clgi in 
x86_emulate. Any reason for not going that route? GIF somehow needs exposure to userspace too, right? Otherwise, when using -no-kernel-apic, userspace may try to inject an interrupt when the guest cannot handle it, right? Regards, Anthony Liguori ^ permalink raw reply [flat|nested] 53+ messages in thread
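Stripped of the KVM plumbing, the hidden-flags approach discussed above reduces to a small state machine: stgi/clgi toggle one bit in a per-vcpu flags word, and interrupt injection is gated on that bit. The following stand-alone sketch illustrates the idea — the bit position and names are illustrative, not the kernel's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

#define HF_GIF_MASK (1ULL << 0)  /* illustrative bit position, not KVM's */

struct vcpu {
    uint64_t hflags;  /* hidden flags: SVM state that lives outside the vmcb */
};

/* stgi: set GIF, so the vcpu can take interrupts again */
static void stgi(struct vcpu *v)
{
    v->hflags |= HF_GIF_MASK;
}

/* clgi: clear GIF, holding all interrupts pending */
static void clgi(struct vcpu *v)
{
    v->hflags &= ~HF_GIF_MASK;
}

/* Injection (and the userspace interrupt window) must honor GIF */
static int can_inject_irq(const struct vcpu *v)
{
    return (v->hflags & HF_GIF_MASK) != 0;
}
```

This is exactly why GIF needs to be visible wherever injection decisions are made — including userspace, in the -no-kernel-apic case raised above.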
* Re: [PATCH 3/9] Implement GIF, clgi and stgi v5
  2008-10-30 18:02 ` [PATCH 3/9] " Anthony Liguori
@ 2008-10-30 18:10   ` Avi Kivity
  2008-10-30 18:35     ` Alexander Graf
  2008-10-30 19:11     ` Anthony Liguori
  0 siblings, 2 replies; 53+ messages in thread

From: Avi Kivity @ 2008-10-30 18:10 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Alexander Graf, kvm, joro

Anthony Liguori wrote:
>>
>> +static int stgi_interception(struct vcpu_svm *svm, struct kvm_run
>> *kvm_run)
>> +{
>
> It feels a little strange to see this here instead of an
> implementation of stgi/clgi in x86_emulate. Any reason for not going
> that route?
>

We already know the instruction is stgi, no need to go through the
guest page tables to fetch it. We do the same thing for all
instructions for which we have the length and all the information
necessary to execute it.

> GIF somehow needs exposure to userspace too, right? Otherwise, when
> using -no-kernel-apic, userspace may try to inject an interrupt when
> the guest cannot handle it, right?

Hmm, right, it needs to close the interrupt window.

-- 
I have a truly marvellous patch that fixes the bug which this signature
is too narrow to contain.

^ permalink raw reply	[flat|nested] 53+ messages in thread
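Avi's point above is that on a #VMEXIT the hardware already reports which intercepted instruction trapped, so SVM can jump straight from the exit code to a handler without ever fetching or decoding guest bytes through x86_emulate. Schematically (exit codes and handler names are invented for illustration, not the real SVM exit codes):

```c
#include <assert.h>
#include <stddef.h>

enum exit_code { EXIT_STGI, EXIT_CLGI, EXIT_MAX };  /* illustrative */

typedef int (*handler_fn)(void);

static int handle_stgi(void) { return 1; }
static int handle_clgi(void) { return 2; }

/* Hardware tells us which instruction trapped, so dispatch is a plain
 * table lookup: no guest page-table walk, no instruction decoder. */
static handler_fn handlers[EXIT_MAX] = {
    [EXIT_STGI] = handle_stgi,
    [EXIT_CLGI] = handle_clgi,
};

static int dispatch(enum exit_code code)
{
    return handlers[code]();
}
```

The only extra work per instruction is advancing rip by the known instruction length, which is what the `svm->next_rip = kvm_rip_read(...) + 3` line in the patch does.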
* Re: [PATCH 3/9] Implement GIF, clgi and stgi v5
  2008-10-30 18:10 ` Avi Kivity
@ 2008-10-30 18:35   ` Alexander Graf
  2008-10-30 19:08     ` Anthony Liguori
  2008-10-30 19:11   ` Anthony Liguori
  1 sibling, 1 reply; 53+ messages in thread

From: Alexander Graf @ 2008-10-30 18:35 UTC (permalink / raw)
To: Avi Kivity; +Cc: Anthony Liguori, kvm, joro

On 30.10.2008, at 19:10, Avi Kivity wrote:

> Anthony Liguori wrote:
>>> +static int stgi_interception(struct vcpu_svm *svm, struct kvm_run
>>> *kvm_run)
>>> +{
>>
>> It feels a little strange to see this here instead of an
>> implementation of stgi/clgi in x86_emulate. Any reason for not
>> going that route?
>>
>
> We already know the instruction is stgi, no need to go through the
> guest page tables to fetch it. We do the same thing for all
> instructions for which we have the length and all the information
> necessary to execute it.
>
>> GIF somehow needs exposure to userspace too, right? Otherwise,
>> when using -no-kernel-apic, userspace may try to inject an
>> interrupt when the guest cannot handle it, right?
>
> Hmm, right, it needs to close the interrupt window.

Yes, it's broken with -no-kernel-apic, since the userspace doesn't
know the window is closed. Any good suggestions here? Joerg and me
decided to just ignore the non-kernel-apic case, but if it's important
we might want to sync the hflags with userspace, so we can export the
GIF.

Alex

^ permalink raw reply	[flat|nested] 53+ messages in thread
* Re: [PATCH 3/9] Implement GIF, clgi and stgi v5
  2008-10-30 18:35 ` Alexander Graf
@ 2008-10-30 19:08   ` Anthony Liguori
  0 siblings, 0 replies; 53+ messages in thread

From: Anthony Liguori @ 2008-10-30 19:08 UTC (permalink / raw)
To: Alexander Graf; +Cc: Avi Kivity, kvm, joro

Alexander Graf wrote:
>
> On 30.10.2008, at 19:10, Avi Kivity wrote:
>>> GIF somehow needs exposure to userspace too, right? Otherwise, when
>>> using -no-kernel-apic, userspace may try to inject an interrupt when
>>> the guest cannot handle it, right?
>>
>> Hmm, right, it needs to close the interrupt window.
>
> Yes, it's broken with -no-kernel-apic, since the userspace doesn't
> know the window is closed. Any good suggestions here? Joerg and me
> decided to just ignore the non-kernel-apic case, but if it's important
> we might want to sync the hflags with userspace, so we can export the
> GIF.

Maybe abusing kvm_run->if_flag? Currently, userspace treats if_flag as
a boolean. If you stored GIF and eflags.IF in there, I believe old
userspaces would just work.

Regards,

Anthony Liguori

> Alex
>

^ permalink raw reply	[flat|nested] 53+ messages in thread
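One way to read the if_flag suggestion above: since existing userspace only tests the field as a boolean, reporting the logical AND of eflags.IF and GIF keeps old binaries working while still closing the interrupt window when GIF is clear. A sketch of that computation (field semantics here are the idea under discussion, not the actual KVM ABI):

```c
#include <assert.h>
#include <stdint.h>

#define X86_EFLAGS_IF (1ULL << 9)  /* IF bit in RFLAGS */
#define HF_GIF_MASK   (1ULL << 0)  /* illustrative hidden-flag bit */

/*
 * Report interrupts as deliverable only when both eflags.IF and GIF
 * are set. A userspace that tests if_flag as a plain boolean keeps
 * working unchanged, yet stops injecting while GIF is clear.
 */
static uint8_t compute_if_flag(uint64_t rflags, uint64_t hflags)
{
    return (rflags & X86_EFLAGS_IF) && (hflags & HF_GIF_MASK);
}
```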
* Re: [PATCH 3/9] Implement GIF, clgi and stgi v5
  2008-10-30 18:10 ` Avi Kivity
  2008-10-30 18:35 ` Alexander Graf
@ 2008-10-30 19:11 ` Anthony Liguori
  1 sibling, 0 replies; 53+ messages in thread

From: Anthony Liguori @ 2008-10-30 19:11 UTC (permalink / raw)
To: Avi Kivity; +Cc: Alexander Graf, kvm, joro

Avi Kivity wrote:
> Anthony Liguori wrote:
>>>
>>> +static int stgi_interception(struct vcpu_svm *svm, struct kvm_run
>>> *kvm_run)
>>> +{
>>
>> It feels a little strange to see this here instead of an
>> implementation of stgi/clgi in x86_emulate. Any reason for not going
>> that route?
>>
>
> We already know the instruction is stgi, no need to go through the
> guest page tables to fetch it. We do the same thing for all
> instructions for which we have the length and all the information
> necessary to execute it.

Fair enough.

Regards,

Anthony Liguori

>> GIF somehow needs exposure to userspace too, right? Otherwise, when
>> using -no-kernel-apic, userspace may try to inject an interrupt when
>> the guest cannot handle it, right?
>
> Hmm, right, it needs to close the interrupt window.
>

^ permalink raw reply	[flat|nested] 53+ messages in thread
* Re: [PATCH 2/9] Add helper functions for nested SVM v5 2008-10-20 17:04 ` [PATCH 2/9] Add helper functions for nested SVM v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 3/9] Implement GIF, clgi and stgi v5 Alexander Graf @ 2008-10-29 13:48 ` Joerg Roedel 2008-10-30 17:56 ` Anthony Liguori 2 siblings, 0 replies; 53+ messages in thread From: Joerg Roedel @ 2008-10-29 13:48 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, anthony, avi On Mon, Oct 20, 2008 at 07:04:43PM +0200, Alexander Graf wrote: > These are helpers for the nested SVM implementation. > > - nsvm_printk implements a debug printk variant > - nested_svm_do calls a handler that can accesses gpa-based memory > > v3 makes use of the new permission checker > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 files changed, 88 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 4ee5376..a00421b 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -48,6 +48,16 @@ MODULE_LICENSE("GPL"); > > #define DEBUGCTL_RESERVED_BITS (~(0x3fULL)) > > +/* Turn on to get debugging output*/ > +/* #define NESTED_DEBUG */ > + > +#ifdef NESTED_DEBUG > +#define nsvm_printk(fmt, args...) printk(KERN_INFO fmt, ## args) > +#else > +static inline void nsvm_printk(char *fmt, ...) { > +} > +#endif > + > /* enable NPT for AMD64 and X86 with PAE */ > #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE) > static bool npt_enabled = true; > @@ -1145,6 +1155,84 @@ static int vmmcall_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > return 1; > } > > +static int nested_svm_check_permissions(struct vcpu_svm *svm) > +{ > + if (svm->vmcb->save.cpl) { > + printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n", > + __func__, svm->vmcb->save.cpl, kvm_rip_read(&svm->vcpu)); This is not really a kernel error. Please remove this printk. It allows the guest to flood the logfiles of the host. 
> + kvm_queue_exception(&svm->vcpu, GP_VECTOR); > + return 1; > + } > + > + if (!(svm->vcpu.arch.shadow_efer & MSR_EFER_SVME_MASK) > + || !is_paging(&svm->vcpu)) { > + kvm_queue_exception(&svm->vcpu, UD_VECTOR); > + return 1; > + } > + > + return 0; > +} > + > +static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa) > +{ > + struct page *page; > + > + down_read(¤t->mm->mmap_sem); > + page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT); > + up_read(¤t->mm->mmap_sem); > + > + if (is_error_page(page)) { > + printk(KERN_ERR "%s: could not find page at 0x%llx\n", > + __func__, gpa); > + kvm_release_page_clean(page); > + kvm_queue_exception(&svm->vcpu, GP_VECTOR); > + return NULL; > + } > + return page; > +} > + > +static int nested_svm_do(struct vcpu_svm *svm, > + u64 arg1_gpa, u64 arg2_gpa, void *opaque, > + int (*handler)(struct vcpu_svm *svm, > + void *arg1, > + void *arg2, > + void *opaque)) > +{ > + struct page *arg1_page; > + struct page *arg2_page = NULL; > + void *arg1; > + void *arg2 = NULL; > + int retval; > + > + arg1_page = nested_svm_get_page(svm, arg1_gpa); > + if(arg1_page == NULL) > + return 1; > + > + if (arg2_gpa) { > + arg2_page = nested_svm_get_page(svm, arg2_gpa); > + if(arg2_page == NULL) { > + kvm_release_page_clean(arg1_page); > + return 1; > + } > + } > + > + arg1 = kmap_atomic(arg1_page, KM_USER0); > + if (arg2_gpa) > + arg2 = kmap_atomic(arg2_page, KM_USER1); > + > + retval = handler(svm, arg1, arg2, opaque); > + > + kunmap_atomic(arg1, KM_USER0); > + if (arg2_gpa) > + kunmap_atomic(arg2, KM_USER1); > + > + kvm_release_page_dirty(arg1_page); > + if (arg2_gpa) > + kvm_release_page_dirty(arg2_page); > + > + return retval; > +} > + > static int invalid_op_interception(struct vcpu_svm *svm, > struct kvm_run *kvm_run) > { > -- > 1.5.6 ^ permalink raw reply [flat|nested] 53+ messages in thread
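The nested_svm_do() helper quoted above follows an acquire-both-or-roll-back pattern: pin the first guest page, optionally pin the second (releasing the first if that fails), run the handler on the mapped pages, then tear everything down in reverse. Stripped of the KVM specifics, the control flow looks like this — the acquire/release functions are stand-ins for gfn_to_page()/kmap_atomic(), not KVM APIs:

```c
#include <assert.h>
#include <stdlib.h>

/* stand-in for a page lookup that may fail and return NULL */
static void *acquire(int ok) { return ok ? malloc(16) : NULL; }
static void release(void *p) { free(p); }

/* Mirrors nested_svm_do(): return handler result, or 1 on lookup failure
 * (the real code injects #GP and resumes the guest in that case). */
static int do_with_pages(int have1, int have2, int want2,
                         int (*handler)(void *a1, void *a2))
{
    void *arg1, *arg2 = NULL;
    int ret;

    arg1 = acquire(have1);
    if (arg1 == NULL)
        return 1;

    if (want2) {
        arg2 = acquire(have2);
        if (arg2 == NULL) {
            release(arg1);          /* roll back the first pin */
            return 1;
        }
    }

    ret = handler(arg1, arg2);

    if (want2)
        release(arg2);              /* tear down in reverse order */
    release(arg1);
    return ret;
}

static int sample_handler(void *a1, void *a2)
{
    (void)a1; (void)a2;
    return 0;
}
```

The second argument being optional (arg2_gpa == 0) is what lets one helper serve both single-page users like VMLOAD/VMSAVE and two-page users like VMRUN.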
* Re: [PATCH 2/9] Add helper functions for nested SVM v5 2008-10-20 17:04 ` [PATCH 2/9] Add helper functions for nested SVM v5 Alexander Graf 2008-10-20 17:04 ` [PATCH 3/9] Implement GIF, clgi and stgi v5 Alexander Graf 2008-10-29 13:48 ` [PATCH 2/9] Add helper functions for nested SVM v5 Joerg Roedel @ 2008-10-30 17:56 ` Anthony Liguori 2008-10-30 18:41 ` Alexander Graf 2 siblings, 1 reply; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 17:56 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > These are helpers for the nested SVM implementation. > > - nsvm_printk implements a debug printk variant > - nested_svm_do calls a handler that can accesses gpa-based memory > > v3 makes use of the new permission checker > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 88 ++++++++++++++++++++++++++++++++++++++++++++++++++++ > 1 files changed, 88 insertions(+), 0 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 4ee5376..a00421b 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -48,6 +48,16 @@ MODULE_LICENSE("GPL"); > > #define DEBUGCTL_RESERVED_BITS (~(0x3fULL)) > > +/* Turn on to get debugging output*/ > +/* #define NESTED_DEBUG */ > + > +#ifdef NESTED_DEBUG > +#define nsvm_printk(fmt, args...) printk(KERN_INFO fmt, ## args) > +#else > +static inline void nsvm_printk(char *fmt, ...) { > +} > +#endif > > I understand why you have this form, but stylistically the '{' should come on the next line. Personally, I'd stick with the standard do {} while(0) just to avoid confusion although in the context of the kernel, I think pr_debug() would be better although a little more annoying to use. 
> /* enable NPT for AMD64 and X86 with PAE */ > #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE) > static bool npt_enabled = true; > @@ -1145,6 +1155,84 @@ static int vmmcall_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > return 1; > } > > +static int nested_svm_check_permissions(struct vcpu_svm *svm) > +{ > + if (svm->vmcb->save.cpl) { > + printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n", > + __func__, svm->vmcb->save.cpl, kvm_rip_read(&svm->vcpu)); > + kvm_queue_exception(&svm->vcpu, GP_VECTOR); > GPFs need an error code. Do you really think a GP should be delivered before checking SVME though? I think you ought to switch the order of these checks. > + return 1; > + } > + > + if (!(svm->vcpu.arch.shadow_efer & MSR_EFER_SVME_MASK) > + || !is_paging(&svm->vcpu)) { > + kvm_queue_exception(&svm->vcpu, UD_VECTOR); > + return 1; > + } > + > + return 0; > +} > + > +static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa) > +{ > + struct page *page; > + > + down_read(¤t->mm->mmap_sem); > + page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT); > + up_read(¤t->mm->mmap_sem); > + > + if (is_error_page(page)) { > + printk(KERN_ERR "%s: could not find page at 0x%llx\n", > + __func__, gpa); > + kvm_release_page_clean(page); > + kvm_queue_exception(&svm->vcpu, GP_VECTOR); > + return NULL; > + } > + return page; > +} > + > +static int nested_svm_do(struct vcpu_svm *svm, > + u64 arg1_gpa, u64 arg2_gpa, void *opaque, > + int (*handler)(struct vcpu_svm *svm, > + void *arg1, > + void *arg2, > + void *opaque)) > +{ > + struct page *arg1_page; > + struct page *arg2_page = NULL; > + void *arg1; > + void *arg2 = NULL; > + int retval; > + > + arg1_page = nested_svm_get_page(svm, arg1_gpa); > + if(arg1_page == NULL) > + return 1; > + > + if (arg2_gpa) { > + arg2_page = nested_svm_get_page(svm, arg2_gpa); > + if(arg2_page == NULL) { > + kvm_release_page_clean(arg1_page); > + return 1; > + } > + } > + > + arg1 = kmap_atomic(arg1_page, KM_USER0); > + if 
(arg2_gpa) > + arg2 = kmap_atomic(arg2_page, KM_USER1); > + > + retval = handler(svm, arg1, arg2, opaque); > + > + kunmap_atomic(arg1, KM_USER0); > + if (arg2_gpa) > + kunmap_atomic(arg2, KM_USER1); > + > + kvm_release_page_dirty(arg1_page); > + if (arg2_gpa) > + kvm_release_page_dirty(arg2_page); > + > + return retval; > +} > + > I appreciate small patches but introducing statics that aren't used is going to generate warnings when bisecting. I think that suggests your splitting things at the wrong level. Regards, Anthony Liguori > static int invalid_op_interception(struct vcpu_svm *svm, > struct kvm_run *kvm_run) > { > ^ permalink raw reply [flat|nested] 53+ messages in thread
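The macro style Anthony suggests above — an empty `do { } while (0)` body for the non-debug configuration — keeps nsvm_printk behaving like a single statement in every syntactic context, which is the usual reason the kernel prefers it over an empty inline or a bare empty expansion. A userspace sketch of the idiom (printf standing in for printk):

```c
#include <assert.h>
#include <stdio.h>

/* #define NESTED_DEBUG */

#ifdef NESTED_DEBUG
#define nsvm_printk(...) printf(__VA_ARGS__)
#else
/* Expands to one well-formed statement; note that argument side
 * effects are discarded entirely in this configuration. */
#define nsvm_printk(...) do { } while (0)
#endif

/* The macro is safe even in an unbraced if/else branch: */
static int demo(int verbose)
{
    if (verbose)
        nsvm_printk("nested debug output\n");
    else
        return 0;
    return 1;
}
```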
* Re: [PATCH 2/9] Add helper functions for nested SVM v5 2008-10-30 17:56 ` Anthony Liguori @ 2008-10-30 18:41 ` Alexander Graf 2008-10-30 19:10 ` Anthony Liguori 0 siblings, 1 reply; 53+ messages in thread From: Alexander Graf @ 2008-10-30 18:41 UTC (permalink / raw) To: Anthony Liguori; +Cc: kvm, joro, avi On 30.10.2008, at 18:56, Anthony Liguori wrote: > Alexander Graf wrote: >> These are helpers for the nested SVM implementation. >> >> - nsvm_printk implements a debug printk variant >> - nested_svm_do calls a handler that can accesses gpa-based memory >> >> v3 makes use of the new permission checker >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/svm.c | 88 ++++++++++++++++++++++++++++++++++++++++++ >> ++++++++++ >> 1 files changed, 88 insertions(+), 0 deletions(-) >> >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index 4ee5376..a00421b 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -48,6 +48,16 @@ MODULE_LICENSE("GPL"); >> #define DEBUGCTL_RESERVED_BITS (~(0x3fULL)) >> +/* Turn on to get debugging output*/ >> +/* #define NESTED_DEBUG */ >> + >> +#ifdef NESTED_DEBUG >> +#define nsvm_printk(fmt, args...) printk(KERN_INFO fmt, ## args) >> +#else >> +static inline void nsvm_printk(char *fmt, ...) { >> +} >> +#endif >> >> > > I understand why you have this form, but stylistically the '{' > should come on the next line. Personally, I'd stick with the > standard do {} while(0) just to avoid confusion although in the > context of the kernel, I think pr_debug() would be better although a > little more annoying to use. A define to do {} while(0) sounds good. I'll do that. 
> > >> /* enable NPT for AMD64 and X86 with PAE */ >> #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE) >> static bool npt_enabled = true; >> @@ -1145,6 +1155,84 @@ static int vmmcall_interception(struct >> vcpu_svm *svm, struct kvm_run *kvm_run) >> return 1; >> } >> +static int nested_svm_check_permissions(struct vcpu_svm *svm) >> +{ >> + if (svm->vmcb->save.cpl) { >> + printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n", >> + __func__, svm->vmcb->save.cpl, kvm_rip_read(&svm->vcpu)); >> + kvm_queue_exception(&svm->vcpu, GP_VECTOR); >> > > GPFs need an error code. Do you really think a GP should be > delivered before checking SVME though? I think you ought to switch > the order of these checks. Nice catch. The spec also says SVME is checked before CPL. What error code exactly would we need here? >> + return 1; >> + } >> + >> + if (!(svm->vcpu.arch.shadow_efer & MSR_EFER_SVME_MASK) >> + || !is_paging(&svm->vcpu)) { >> + kvm_queue_exception(&svm->vcpu, UD_VECTOR); >> + return 1; >> + } >> + >> + return 0; >> +} >> + >> +static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 >> gpa) >> +{ >> + struct page *page; >> + >> + down_read(¤t->mm->mmap_sem); >> + page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT); >> + up_read(¤t->mm->mmap_sem); >> + >> + if (is_error_page(page)) { >> + printk(KERN_ERR "%s: could not find page at 0x%llx\n", >> + __func__, gpa); >> + kvm_release_page_clean(page); >> + kvm_queue_exception(&svm->vcpu, GP_VECTOR); >> + return NULL; >> + } >> + return page; >> +} >> + >> +static int nested_svm_do(struct vcpu_svm *svm, >> + u64 arg1_gpa, u64 arg2_gpa, void *opaque, >> + int (*handler)(struct vcpu_svm *svm, >> + void *arg1, >> + void *arg2, >> + void *opaque)) >> +{ >> + struct page *arg1_page; >> + struct page *arg2_page = NULL; >> + void *arg1; >> + void *arg2 = NULL; >> + int retval; >> + >> + arg1_page = nested_svm_get_page(svm, arg1_gpa); >> + if(arg1_page == NULL) >> + return 1; >> + >> + if (arg2_gpa) { >> + arg2_page = 
nested_svm_get_page(svm, arg2_gpa); >> + if(arg2_page == NULL) { >> + kvm_release_page_clean(arg1_page); >> + return 1; >> + } >> + } >> + >> + arg1 = kmap_atomic(arg1_page, KM_USER0); >> + if (arg2_gpa) >> + arg2 = kmap_atomic(arg2_page, KM_USER1); >> + >> + retval = handler(svm, arg1, arg2, opaque); >> + >> + kunmap_atomic(arg1, KM_USER0); >> + if (arg2_gpa) >> + kunmap_atomic(arg2, KM_USER1); >> + >> + kvm_release_page_dirty(arg1_page); >> + if (arg2_gpa) >> + kvm_release_page_dirty(arg2_page); >> + >> + return retval; >> +} >> + >> > > I appreciate small patches but introducing statics that aren't used > is going to generate warnings when bisecting. I think that suggests > your splitting things at the wrong level. I figured warnings are nicer than having a blown-up patch. These functions are basically used within all patches from that one on, so it felt logical to split them up this way. How would you have split them up? Alex ^ permalink raw reply [flat|nested] 53+ messages in thread
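The ordering agreed on above — SVME (and the other enablement conditions) are checked first and raise #UD, and only then is CPL checked, raising #GP — can be captured in a few lines. This is a self-contained sketch of the check order being discussed, not the kernel function itself:

```c
#include <assert.h>

enum fault { FAULT_NONE, FAULT_UD, FAULT_GP };

/*
 * Per the discussion: a disabled SVME (or no paging) yields #UD before
 * the privilege level is even looked at; a non-zero CPL yields #GP.
 */
static enum fault check_svm_permissions(int svme, int paging, int cpl)
{
    if (!svme || !paging)
        return FAULT_UD;
    if (cpl != 0)
        return FAULT_GP;
    return FAULT_NONE;
}
```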
* Re: [PATCH 2/9] Add helper functions for nested SVM v5 2008-10-30 18:41 ` Alexander Graf @ 2008-10-30 19:10 ` Anthony Liguori 0 siblings, 0 replies; 53+ messages in thread From: Anthony Liguori @ 2008-10-30 19:10 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joro, avi Alexander Graf wrote: > > On 30.10.2008, at 18:56, Anthony Liguori wrote: > >> >> >>> /* enable NPT for AMD64 and X86 with PAE */ >>> #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE) >>> static bool npt_enabled = true; >>> @@ -1145,6 +1155,84 @@ static int vmmcall_interception(struct >>> vcpu_svm *svm, struct kvm_run *kvm_run) >>> return 1; >>> } >>> +static int nested_svm_check_permissions(struct vcpu_svm *svm) >>> +{ >>> + if (svm->vmcb->save.cpl) { >>> + printk(KERN_ERR "%s: invalid cpl 0x%x at ip 0x%lx\n", >>> + __func__, svm->vmcb->save.cpl, >>> kvm_rip_read(&svm->vcpu)); >>> + kvm_queue_exception(&svm->vcpu, GP_VECTOR); >>> >> >> GPFs need an error code. Do you really think a GP should be >> delivered before checking SVME though? I think you ought to switch >> the order of these checks. > > Nice catch. The spec also says SVME is checked before CPL. > What error code exactly would we need here? I don't know, I would have to look at the spec. Joerg may know. 
>>> + return 1; >>> + } >>> + >>> + if (!(svm->vcpu.arch.shadow_efer & MSR_EFER_SVME_MASK) >>> + || !is_paging(&svm->vcpu)) { >>> + kvm_queue_exception(&svm->vcpu, UD_VECTOR); >>> + return 1; >>> + } >>> + >>> + return 0; >>> +} >>> + >>> +static struct page *nested_svm_get_page(struct vcpu_svm *svm, u64 gpa) >>> +{ >>> + struct page *page; >>> + >>> + down_read(¤t->mm->mmap_sem); >>> + page = gfn_to_page(svm->vcpu.kvm, gpa >> PAGE_SHIFT); >>> + up_read(¤t->mm->mmap_sem); >>> + >>> + if (is_error_page(page)) { >>> + printk(KERN_ERR "%s: could not find page at 0x%llx\n", >>> + __func__, gpa); >>> + kvm_release_page_clean(page); >>> + kvm_queue_exception(&svm->vcpu, GP_VECTOR); >>> + return NULL; >>> + } >>> + return page; >>> +} >>> + >>> +static int nested_svm_do(struct vcpu_svm *svm, >>> + u64 arg1_gpa, u64 arg2_gpa, void *opaque, >>> + int (*handler)(struct vcpu_svm *svm, >>> + void *arg1, >>> + void *arg2, >>> + void *opaque)) >>> +{ >>> + struct page *arg1_page; >>> + struct page *arg2_page = NULL; >>> + void *arg1; >>> + void *arg2 = NULL; >>> + int retval; >>> + >>> + arg1_page = nested_svm_get_page(svm, arg1_gpa); >>> + if(arg1_page == NULL) >>> + return 1; >>> + >>> + if (arg2_gpa) { >>> + arg2_page = nested_svm_get_page(svm, arg2_gpa); >>> + if(arg2_page == NULL) { >>> + kvm_release_page_clean(arg1_page); >>> + return 1; >>> + } >>> + } >>> + >>> + arg1 = kmap_atomic(arg1_page, KM_USER0); >>> + if (arg2_gpa) >>> + arg2 = kmap_atomic(arg2_page, KM_USER1); >>> + >>> + retval = handler(svm, arg1, arg2, opaque); >>> + >>> + kunmap_atomic(arg1, KM_USER0); >>> + if (arg2_gpa) >>> + kunmap_atomic(arg2, KM_USER1); >>> + >>> + kvm_release_page_dirty(arg1_page); >>> + if (arg2_gpa) >>> + kvm_release_page_dirty(arg2_page); >>> + >>> + return retval; >>> +} >>> + >>> >> >> I appreciate small patches but introducing statics that aren't used >> is going to generate warnings when bisecting. I think that suggests >> your splitting things at the wrong level. 
> > I figured warnings are nicer than having a blown-up patch. These > functions are basically used within all patches from that one on, so > it felt logical to split them up this way. How would you have split > them up? I would have folded them into the first user but this is just an issue of personal preference I think. Regards, Anthony Liguori > Alex > ^ permalink raw reply [flat|nested] 53+ messages in thread