* [PATCH 0/6] Add rudimentary Hyper-V guest support @ 2009-05-15 8:22 Alexander Graf 2009-05-15 8:22 ` [PATCH 1/6] Add definition for IGNNE MSR Alexander Graf 2009-05-15 10:47 ` [PATCH 0/6] Add rudimentary Hyper-V guest support Alexander Graf 0 siblings, 2 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel Now that we have nested SVM in place, let's make use of it and virtualize something non-kvm. The first interesting target that came to my mind here was Hyper-V. This patchset makes Windows Server 2008 boot with Hyper-V, which runs the "dom0" in virtualized mode already. I haven't been able to run a second VM within for now though, but maybe I just wasn't patient enough ;-). Alexander Graf (6): Add definition for IGNNE MSR MMU: don't bail on PAT bits in PTE Emulator: Inject #PF when page was not found Implement Hyper-V MSRs Nested SVM: Implement INVLPGA Nested SVM: Improve interrupt injection arch/x86/include/asm/msr-index.h | 1 + arch/x86/kvm/mmu.c | 2 +- arch/x86/kvm/svm.c | 59 +++++++++++++++++++++++++++---------- arch/x86/kvm/x86.c | 7 +++- 4 files changed, 50 insertions(+), 19 deletions(-) ^ permalink raw reply [flat|nested] 41+ messages in thread
* [PATCH 1/6] Add definition for IGNNE MSR 2009-05-15 8:22 [PATCH 0/6] Add rudimentary Hyper-V guest support Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-15 8:22 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Alexander Graf 2009-05-15 10:47 ` [PATCH 0/6] Add rudimentary Hyper-V guest support Alexander Graf 1 sibling, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel Hyper-V tries to access MSR_IGNNE, so let's at least have a definition for it in our headers. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/include/asm/msr-index.h | 1 + 1 files changed, 1 insertions(+), 0 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index ec41fc1..e273549 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -372,6 +372,7 @@ /* AMD-V MSRs */ #define MSR_VM_CR 0xc0010114 +#define MSR_VM_IGNNE 0xc0010115 #define MSR_VM_HSAVE_PA 0xc0010117 #endif /* _ASM_X86_MSR_INDEX_H */ -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
* [PATCH 2/6] MMU: don't bail on PAT bits in PTE 2009-05-15 8:22 ` [PATCH 1/6] Add definition for IGNNE MSR Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-15 8:22 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Alexander Graf 2009-05-15 10:25 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Michael S. Tsirkin 0 siblings, 2 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel A 64bit PTE can have bit7 set to 1 which means "Use this bit for the PAT". Currently KVM's MMU code treats this bit as reserved, even though it's not. As long as we're not required to make use of the PAT bits which is only required for DMA/MMIO from my understanding, we can safely ignore it. Hyper-V uses this bit for kernel PTEs. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/mmu.c | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c index 8fcdae9..cce055a 100644 --- a/arch/x86/kvm/mmu.c +++ b/arch/x86/kvm/mmu.c @@ -2169,7 +2169,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level) context->rsvd_bits_mask[1][1] = exb_bit_rsvd | rsvd_bits(maxphyaddr, 51) | rsvd_bits(13, 20); /* large page */ - context->rsvd_bits_mask[1][0] = ~0ull; + context->rsvd_bits_mask[1][0] = 0ull; break; } } -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
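[Editorial note: for reference, the check this patch relaxes can be modeled outside the kernel. KVM's page-table walker selects a reserved-bit mask indexed by bit 7 of the entry and by the paging level, and at level 1 (a 4KB PTE) bit 7 is the PAT selector rather than a page-size bit. A minimal standalone model follows; the mask indexing mirrors reset_rsvds_bits_mask(), but the helper is a simplification, not the kernel's exact code.]

```c
#include <stdint.h>

/* Simplified model of KVM's reserved-bit check: the walker picks a mask
 * indexed by bit 7 of the entry and by the paging level.  At level 1
 * (a 4KB PTE) bit 7 is the PAT selector, not a page-size bit, so the
 * pre-patch value rsvd_bits_mask[1][0] = ~0ull made every PAT-using
 * PTE look as if it had reserved bits set. */
uint64_t rsvd_bits_mask[2][4];

int is_rsvd_bits_set(uint64_t pte, int level)
{
    return (pte & rsvd_bits_mask[(pte >> 7) & 1][level - 1]) != 0;
}
```

With the old value `~0ull` in `rsvd_bits_mask[1][0]`, any 4KB PTE with bit 7 set is rejected; with the patched value `0ull` it passes, which matches Hyper-V's use of the PAT bit in kernel PTEs.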
* [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-15 8:22 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-15 8:22 ` [PATCH 4/6] Implement Hyper-V MSRs Alexander Graf ` (2 more replies) 2009-05-15 10:25 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Michael S. Tsirkin 1 sibling, 3 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel If we couldn't find a page on read_emulated, it might be a good idea to tell the guest about that and inject a #PF. We do the same already for write faults. I don't know why it was not implemented for reads. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/x86.c | 7 +++++-- 1 files changed, 5 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 5fcde2c..5aa1219 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -2131,10 +2131,13 @@ static int emulator_read_emulated(unsigned long addr, goto mmio; if (kvm_read_guest_virt(addr, val, bytes, vcpu) - == X86EMUL_CONTINUE) + == X86EMUL_CONTINUE) { return X86EMUL_CONTINUE; - if (gpa == UNMAPPED_GVA) + } + if (gpa == UNMAPPED_GVA) { + kvm_inject_page_fault(vcpu, addr, 0); return X86EMUL_PROPAGATE_FAULT; + } mmio: /* -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
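[Editorial note: the 0 passed as the last argument to kvm_inject_page_fault() is the architectural #PF error code, which encodes a supervisor-mode read of a not-present page. A sketch of the relevant bits follows; the bit layout is per the x86 manuals (KVM spells these PFERR_*_MASK), but the two helper functions are illustrative only.]

```c
#include <stdint.h>

/* x86 #PF error-code bits (architectural): */
#define PFERR_PRESENT_MASK  (1u << 0)  /* 0 = page not present     */
#define PFERR_WRITE_MASK    (1u << 1)  /* 0 = fault was a read     */
#define PFERR_USER_MASK     (1u << 2)  /* 0 = supervisor access    */
#define PFERR_RSVD_MASK     (1u << 3)  /* reserved-bit violation   */
#define PFERR_FETCH_MASK    (1u << 4)  /* instruction fetch        */

/* Error code the patch passes for a failed emulated read: all bits
 * clear, i.e. "supervisor read of a not-present page". */
uint32_t emulated_read_pf_error_code(void)
{
    return 0;
}

/* For comparison, a supervisor write to a not-present page would
 * carry the write bit. */
uint32_t emulated_write_pf_error_code(void)
{
    return PFERR_WRITE_MASK;
}
```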
* [PATCH 4/6] Implement Hyper-V MSRs 2009-05-15 8:22 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-15 8:22 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Alexander Graf 2009-05-17 9:54 ` [PATCH 4/6] Implement Hyper-V MSRs Avi Kivity 2009-05-15 13:40 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Joerg Roedel 2009-05-17 19:59 ` Avi Kivity 2 siblings, 2 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel Hyper-V uses some MSRs, some of which are actually reserved for BIOS usage. But let's be nice today and have it its way, because otherwise it fails terribly. For MSRs where I could find a name I used the name, otherwise they're just added in their hex form for now. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/svm.c | 5 +++++ 1 files changed, 5 insertions(+), 0 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index ef43a18..30e6b43 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1932,6 +1932,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data) *data = svm->hsave_msr; break; case MSR_VM_CR: + case 0x40000081: *data = 0; break; case MSR_IA32_UCODE_REV: @@ -2034,6 +2035,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data) case MSR_VM_HSAVE_PA: svm->hsave_msr = data; break; + case MSR_VM_CR: + case MSR_VM_IGNNE: + case MSR_K8_HWCR: + break; default: return kvm_set_msr_common(vcpu, ecx, data); } -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
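[Editorial note on 0x40000081: MSR indices in 0x40000000 through 0x400000FF are conventionally reserved for synthetic, hypervisor-defined registers (Microsoft's hypervisor interface defines its MSRs in that range), so no bare-metal CPU implements them, unlike the AMD MSRs handled alongside it. A small predicate makes the distinction explicit; the helper name is made up for illustration.]

```c
#include <stdint.h>

/* MSR indices in 0x40000000-0x400000ff are reserved for synthetic
 * (hypervisor-defined) registers and do not exist on bare metal. */
int msr_is_hypervisor_synthetic(uint32_t index)
{
    return index >= 0x40000000u && index <= 0x400000ffu;
}
```

By this test, 0x40000081 is a synthetic MSR, while MSR_VM_CR (0xc0010114), MSR_VM_IGNNE (0xc0010115), and MSR_K8_HWCR are ordinary AMD hardware MSRs that Hyper-V happens to poke.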
* [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-15 8:22 ` [PATCH 4/6] Implement Hyper-V MSRs Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-15 8:22 ` [PATCH 6/6] Nested SVM: Improve interrupt injection Alexander Graf 2009-05-15 13:43 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Joerg Roedel 2009-05-17 9:54 ` [PATCH 4/6] Implement Hyper-V MSRs Avi Kivity 1 sibling, 2 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel SVM adds another way to do INVLPG by ASID which Hyper-V makes use of, so let's implement it! For now we just do the same thing invlpg does, as asid switching means we flush the mmu anyways. That might change one day though. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/svm.c | 14 +++++++++++++- 1 files changed, 13 insertions(+), 1 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index 30e6b43..b2c6cf3 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1785,6 +1785,18 @@ static int clgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) return 1; } +static int invlpga_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) +{ + struct kvm_vcpu *vcpu = &svm->vcpu; + nsvm_printk("INVLPGA\n"); + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; + skip_emulated_instruction(&svm->vcpu); + + kvm_mmu_reset_context(vcpu); + kvm_mmu_load(vcpu); + return 1; +} + static int invalid_op_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) { @@ -2130,7 +2142,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm, [SVM_EXIT_INVD] = emulate_on_interception, [SVM_EXIT_HLT] = halt_interception, [SVM_EXIT_INVLPG] = invlpg_interception, - [SVM_EXIT_INVLPGA] = invalid_op_interception, + [SVM_EXIT_INVLPGA] = invlpga_interception, [SVM_EXIT_IOIO] = io_interception, [SVM_EXIT_MSR] = msr_interception, [SVM_EXIT_TASK_SWITCH] = task_switch_interception, -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
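[Editorial note: architecturally, per the AMD APM, INVLPGA takes the virtual address to invalidate in rAX and the target ASID in ECX. The handler in this patch ignores both operands and rebuilds the whole MMU context; a toy model of the decoded operand flow follows. All types and helpers here are stand-ins, not KVM code.]

```c
#include <stdint.h>

/* Stand-in for a vcpu: just the two registers INVLPGA consumes, plus a
 * field so the model can record what it flushed. */
struct fake_vcpu {
    uint64_t rax;                  /* virtual address operand */
    uint64_t rcx;                  /* ASID operand            */
    uint64_t last_invalidated_va;  /* what the model flushed  */
};

/* Stand-in for a per-page invalidation (kvm_mmu_invlpg-like). */
void model_mmu_invlpg(struct fake_vcpu *vcpu, uint64_t va)
{
    vcpu->last_invalidated_va = va;
}

/* Decoded INVLPGA: address from rAX, ASID from ECX.  The model only
 * does a per-address flush; a real handler must also decide what the
 * ASID means for its shadow state. */
void model_invlpga(struct fake_vcpu *vcpu)
{
    uint64_t va   = vcpu->rax;
    uint32_t asid = (uint32_t)vcpu->rcx;
    (void)asid;
    model_mmu_invlpg(vcpu, va);
}
```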
* [PATCH 6/6] Nested SVM: Improve interrupt injection 2009-05-15 8:22 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Alexander Graf @ 2009-05-15 8:22 ` Alexander Graf 2009-05-17 6:48 ` Gleb Natapov 2009-05-15 13:43 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Joerg Roedel 1 sibling, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-15 8:22 UTC (permalink / raw) To: kvm; +Cc: joerg.roedel While trying to get Hyper-V running, I realized that the interrupt injection mechanisms that are in place right now are not 100% correct. This patch makes nested SVM's interrupt injection behave more like on a real machine. Signed-off-by: Alexander Graf <agraf@suse.de> --- arch/x86/kvm/svm.c | 40 +++++++++++++++++++++++++--------------- 1 files changed, 25 insertions(+), 15 deletions(-) diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index b2c6cf3..1d22d46 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -1517,7 +1517,8 @@ static int nested_svm_vmexit_real(struct vcpu_svm *svm, void *arg1, /* Kill any pending exceptions */ if (svm->vcpu.arch.exception.pending == true) nsvm_printk("WARNING: Pending Exception\n"); - svm->vcpu.arch.exception.pending = false; + kvm_clear_exception_queue(&svm->vcpu); + kvm_clear_interrupt_queue(&svm->vcpu); /* Restore selected save entries */ svm->vmcb->save.es = hsave->save.es; @@ -1585,7 +1586,8 @@ static int nested_svm_vmrun(struct vcpu_svm *svm, void *arg1, svm->nested_vmcb = svm->vmcb->save.rax; /* Clear internal status */ - svm->vcpu.arch.exception.pending = false; + kvm_clear_exception_queue(&svm->vcpu); + kvm_clear_interrupt_queue(&svm->vcpu); /* Save the old vmcb, so we don't need to pick what we save, but can restore everything when a VMEXIT occurs */ @@ -2276,21 +2278,15 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq) ((/*control->int_vector >> 4*/ 0xf) << V_INTR_PRIO_SHIFT); } -static void svm_queue_irq(struct kvm_vcpu *vcpu, unsigned nr) -{ - struct vcpu_svm *svm = to_svm(vcpu); - - 
svm->vmcb->control.event_inj = nr | - SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR; -} - static void svm_set_irq(struct kvm_vcpu *vcpu, int irq) { struct vcpu_svm *svm = to_svm(vcpu); - nested_svm_intr(svm); + if(!(svm->vcpu.arch.hflags & HF_GIF_MASK)) + return; - svm_queue_irq(vcpu, irq); + svm->vmcb->control.event_inj = irq | + SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR; } static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) @@ -2318,13 +2314,25 @@ static int svm_interrupt_allowed(struct kvm_vcpu *vcpu) struct vmcb *vmcb = svm->vmcb; return (vmcb->save.rflags & X86_EFLAGS_IF) && !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) && - (svm->vcpu.arch.hflags & HF_GIF_MASK); + (svm->vcpu.arch.hflags & HF_GIF_MASK) && + !is_nested(svm); } static void enable_irq_window(struct kvm_vcpu *vcpu) { - svm_set_vintr(to_svm(vcpu)); - svm_inject_irq(to_svm(vcpu), 0x0); + struct vcpu_svm *svm = to_svm(vcpu); + nsvm_printk("Trying to open IRQ window\n"); + + nested_svm_intr(svm); + + /* In case GIF=0 we can't rely on the CPU to tell us when + * GIF becomes 1, because that's a separate STGI/VMRUN intercept. + * The next time we get that intercept, this function will be + * called again though and we'll get the vintr intercept. */ + if (svm->vcpu.arch.hflags & HF_GIF_MASK) { + svm_set_vintr(svm); + svm_inject_irq(svm, 0x0); + } } static void enable_nmi_window(struct kvm_vcpu *vcpu) @@ -2392,6 +2400,8 @@ static void svm_complete_interrupts(struct vcpu_svm *svm) case SVM_EXITINTINFO_TYPE_EXEPT: /* In case of software exception do not reinject an exception vector, but re-execute and instruction instead */ + if (is_nested(svm)) + break; if (vector == BP_VECTOR || vector == OF_VECTOR) break; if (exitintinfo & SVM_EXITINTINFO_VALID_ERR) { -- 1.6.0.2 ^ permalink raw reply related [flat|nested] 41+ messages in thread
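[Editorial note: the patched svm_interrupt_allowed() condition reduces to a four-input predicate, which can be modeled in isolation. This is a sketch of the logic only; the names are illustrative.]

```c
#include <stdbool.h>

/* Model of the patched svm_interrupt_allowed(): a host-injected
 * interrupt is deliverable only when EFLAGS.IF is set, the CPU is not
 * in an interrupt shadow, GIF is set, and we are not currently inside
 * a nested guest (where the nested intercept path takes over). */
bool model_interrupt_allowed(bool eflags_if, bool intr_shadow,
                             bool gif, bool is_nested)
{
    return eflags_if && !intr_shadow && gif && !is_nested;
}
```

The `!is_nested` term is the new part: while running the second-level guest, direct injection is suppressed so that enable_irq_window() can route the event through the nested intercept instead.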
* Re: [PATCH 6/6] Nested SVM: Improve interrupt injection 2009-05-15 8:22 ` [PATCH 6/6] Nested SVM: Improve interrupt injection Alexander Graf @ 2009-05-17 6:48 ` Gleb Natapov 2009-05-17 8:10 ` Alexander Graf 2009-05-18 11:47 ` Alexander Graf 0 siblings, 2 replies; 41+ messages in thread From: Gleb Natapov @ 2009-05-17 6:48 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joerg.roedel On Fri, May 15, 2009 at 10:22:20AM +0200, Alexander Graf wrote: > static void svm_set_irq(struct kvm_vcpu *vcpu, int irq) > { > struct vcpu_svm *svm = to_svm(vcpu); > > - nested_svm_intr(svm); > + if(!(svm->vcpu.arch.hflags & HF_GIF_MASK)) > + return; > Why would this function be called if HF_GIF_MASK is not set? This check is done in svm_interrupt_allowed(). > - svm_queue_irq(vcpu, irq); > + svm->vmcb->control.event_inj = irq | > + SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR; > } > -- Gleb. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 6/6] Nested SVM: Improve interrupt injection 2009-05-17 6:48 ` Gleb Natapov @ 2009-05-17 8:10 ` Alexander Graf 2009-05-18 11:47 ` Alexander Graf 1 sibling, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-17 8:10 UTC (permalink / raw) To: Gleb Natapov; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 08:48, Gleb Natapov <gleb@redhat.com> wrote: > On Fri, May 15, 2009 at 10:22:20AM +0200, Alexander Graf wrote: >> static void svm_set_irq(struct kvm_vcpu *vcpu, int irq) >> { >> struct vcpu_svm *svm = to_svm(vcpu); >> >> - nested_svm_intr(svm); >> + if(!(svm->vcpu.arch.hflags & HF_GIF_MASK)) >> + return; >> > Why would this function be called if HF_GIF_MASK is not set? This > check is done in svm_interrupt_allowed(). I agree it shouldn't but I don't remember why exactly it triggered here. I'll try and get a backtrace tomorrow. Either way, if it's not a return, it's a BUG(). Alex > > >> - svm_queue_irq(vcpu, irq); >> + svm->vmcb->control.event_inj = irq | >> + SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR; >> } >> > > -- > Gleb. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 6/6] Nested SVM: Improve interrupt injection 2009-05-17 6:48 ` Gleb Natapov 2009-05-17 8:10 ` Alexander Graf @ 2009-05-18 11:47 ` Alexander Graf 1 sibling, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-18 11:47 UTC (permalink / raw) To: Gleb Natapov; +Cc: kvm, joerg.roedel On 17.05.2009, at 08:48, Gleb Natapov wrote: > On Fri, May 15, 2009 at 10:22:20AM +0200, Alexander Graf wrote: >> static void svm_set_irq(struct kvm_vcpu *vcpu, int irq) >> { >> struct vcpu_svm *svm = to_svm(vcpu); >> >> - nested_svm_intr(svm); >> + if(!(svm->vcpu.arch.hflags & HF_GIF_MASK)) >> + return; >> > Why would this function be called if HF_GIF_MASK is not set? This > check is done in svm_interrupt_allowed(). Looks like I was doing something odd - WARN_ON doesn't trigger (which is reasonable). I think I put the check in when I still had nested_svm_intr() before, because that would unset GIF. Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-15 8:22 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Alexander Graf 2009-05-15 8:22 ` [PATCH 6/6] Nested SVM: Improve interrupt injection Alexander Graf @ 2009-05-15 13:43 ` Joerg Roedel 2009-05-17 20:02 ` Avi Kivity 2009-05-18 13:00 ` Alexander Graf 1 sibling, 2 replies; 41+ messages in thread From: Joerg Roedel @ 2009-05-15 13:43 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm On Fri, May 15, 2009 at 10:22:19AM +0200, Alexander Graf wrote: > SVM adds another way to do INVLPG by ASID which Hyper-V makes use of, > so let's implement it! > > For now we just do the same thing invlpg does, as asid switching > means we flush the mmu anyways. That might change one day though. > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/svm.c | 14 +++++++++++++- > 1 files changed, 13 insertions(+), 1 deletions(-) > > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index 30e6b43..b2c6cf3 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -1785,6 +1785,18 @@ static int clgi_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > return 1; > } > > +static int invlpga_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) > +{ > + struct kvm_vcpu *vcpu = &svm->vcpu; > + nsvm_printk("INVLPGA\n"); > + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; > + skip_emulated_instruction(&svm->vcpu); > + > + kvm_mmu_reset_context(vcpu); > + kvm_mmu_load(vcpu); > + return 1; > +} > + Hmm, since we flush the TLB on every nested-guest entry I think we can make this function a nop. 
> static int invalid_op_interception(struct vcpu_svm *svm, > struct kvm_run *kvm_run) > { > @@ -2130,7 +2142,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm, > [SVM_EXIT_INVD] = emulate_on_interception, > [SVM_EXIT_HLT] = halt_interception, > [SVM_EXIT_INVLPG] = invlpg_interception, > - [SVM_EXIT_INVLPGA] = invalid_op_interception, > + [SVM_EXIT_INVLPGA] = invlpga_interception, > [SVM_EXIT_IOIO] = io_interception, > [SVM_EXIT_MSR] = msr_interception, > [SVM_EXIT_TASK_SWITCH] = task_switch_interception, > -- > 1.6.0.2 > > -- | Advanced Micro Devices GmbH Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München System | Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni Center | Sitz: Dornach, Gemeinde Aschheim, Landkreis München | Registergericht München, HRB Nr. 43632 ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-15 13:43 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Joerg Roedel @ 2009-05-17 20:02 ` Avi Kivity 2009-05-17 20:03 ` Avi Kivity 2009-05-18 13:00 ` Alexander Graf 1 sibling, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 20:02 UTC (permalink / raw) To: Joerg Roedel; +Cc: Alexander Graf, kvm Joerg Roedel wrote: > On Fri, May 15, 2009 at 10:22:19AM +0200, Alexander Graf wrote: > >> SVM adds another way to do INVLPG by ASID which Hyper-V makes use of, >> so let's implement it! >> >> For now we just do the same thing invlpg does, as asid switching >> means we flush the mmu anyways. That might change one day though. >> >> >> +static int invlpga_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run) >> +{ >> + struct kvm_vcpu *vcpu = &svm->vcpu; >> + nsvm_printk("INVLPGA\n"); >> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >> + skip_emulated_instruction(&svm->vcpu); >> + >> + kvm_mmu_reset_context(vcpu); >> + kvm_mmu_load(vcpu); >> + return 1; >> +} >> + >> > > Hmm, since we flush the TLB on every nested-guest entry I think we can > make this function a nop. > I think, unless it specified ASID 0? In that case you need a local tlb flush. (the kvm_mmu_reset_context() and kvm_mmu_load() are total overkills in any case). -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-17 20:02 ` Avi Kivity @ 2009-05-17 20:03 ` Avi Kivity 2009-05-18 18:46 ` Marcelo Tosatti 0 siblings, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 20:03 UTC (permalink / raw) To: Joerg Roedel; +Cc: Alexander Graf, kvm Avi Kivity wrote: >> >> Hmm, since we flush the TLB on every nested-guest entry I think we can >> make this function a nop. >> > > I think, unless it specified ASID 0? In that case you need a local > tlb flush. > > (the kvm_mmu_reset_context() and kvm_mmu_load() are total overkills in > any case). > Oh, but we do need to resync OOS pages, here for ASID 0 and on guest entry. Marcelo? -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-17 20:03 ` Avi Kivity @ 2009-05-18 18:46 ` Marcelo Tosatti 0 siblings, 0 replies; 41+ messages in thread From: Marcelo Tosatti @ 2009-05-18 18:46 UTC (permalink / raw) To: Avi Kivity; +Cc: Joerg Roedel, Alexander Graf, kvm On Sun, May 17, 2009 at 11:03:52PM +0300, Avi Kivity wrote: > Avi Kivity wrote: >>> >>> Hmm, since we flush the TLB on every nested-guest entry I think we can >>> make this function a nop. >>> >> >> I think, unless it specified ASID 0? In that case you need a local >> tlb flush. >> >> (the kvm_mmu_reset_context() and kvm_mmu_load() are total overkills in >> any case). >> > > Oh, but we do need to resync OOS pages, here for ASID 0 and on guest > entry. Marcelo? Right, call kvm_mmu_invlpg() with the linear address passed to INVLPGA. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 5/6] Nested SVM: Implement INVLPGA 2009-05-15 13:43 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Joerg Roedel 2009-05-17 20:02 ` Avi Kivity @ 2009-05-18 13:00 ` Alexander Graf 1 sibling, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-18 13:00 UTC (permalink / raw) To: Joerg Roedel; +Cc: kvm On 15.05.2009, at 15:43, Joerg Roedel wrote: > On Fri, May 15, 2009 at 10:22:19AM +0200, Alexander Graf wrote: >> SVM adds another way to do INVLPG by ASID which Hyper-V makes use of, >> so let's implement it! >> >> For now we just do the same thing invlpg does, as asid switching >> means we flush the mmu anyways. That might change one day though. >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/svm.c | 14 +++++++++++++- >> 1 files changed, 13 insertions(+), 1 deletions(-) >> >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index 30e6b43..b2c6cf3 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -1785,6 +1785,18 @@ static int clgi_interception(struct vcpu_svm >> *svm, struct kvm_run *kvm_run) >> return 1; >> } >> >> +static int invlpga_interception(struct vcpu_svm *svm, struct >> kvm_run *kvm_run) >> +{ >> + struct kvm_vcpu *vcpu = &svm->vcpu; >> + nsvm_printk("INVLPGA\n"); >> + svm->next_rip = kvm_rip_read(&svm->vcpu) + 3; >> + skip_emulated_instruction(&svm->vcpu); >> + >> + kvm_mmu_reset_context(vcpu); >> + kvm_mmu_load(vcpu); >> + return 1; >> +} >> + > > Hmm, since we flush the TLB on every nested-guest entry I think we can > make this function a nop. Well we flush the TLB on every VMRUN, but this is still 100% within the 2nd level guest, so I think we should do something, no?. Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 4/6] Implement Hyper-V MSRs 2009-05-15 8:22 ` [PATCH 4/6] Implement Hyper-V MSRs Alexander Graf 2009-05-15 8:22 ` [PATCH 5/6] Nested SVM: Implement INVLPGA Alexander Graf @ 2009-05-17 9:54 ` Avi Kivity 2009-05-17 19:57 ` Alexander Graf 1 sibling, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 9:54 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joerg.roedel Alexander Graf wrote: > Hyper-V uses some MSRs, some of which are actually reserved for BIOS usage. > > But let's be nice today and have it its way, because otherwise it fails > terribly. > > For MSRs where I could find a name I used the name, otherwise they're just > added in their hex form for now. > > Most of these are not Hyper-V MSRs. They are x86 MSRs that happen to be hit by Hyper-v. > diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c > index ef43a18..30e6b43 100644 > --- a/arch/x86/kvm/svm.c > +++ b/arch/x86/kvm/svm.c > @@ -1932,6 +1932,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 *data) > *data = svm->hsave_msr; > break; > case MSR_VM_CR: > + case 0x40000081: > *data = 0; > break; > This probably is a Hyper-V MSR, but I don't see how it expects it to be present in real hardware. Are you sure this is really needed? > @@ -2034,6 +2035,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, unsigned ecx, u64 data) > case MSR_VM_HSAVE_PA: > svm->hsave_msr = data; > break; > + case MSR_VM_CR: > + case MSR_VM_IGNNE: > + case MSR_K8_HWCR: > + break; > Please add a ratelimited printk() if any value is written which would cause behaviour which we do not emulate. This will prevent a guest getting unexpected behaviour silently. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 4/6] Implement Hyper-V MSRs 2009-05-17 9:54 ` [PATCH 4/6] Implement Hyper-V MSRs Avi Kivity @ 2009-05-17 19:57 ` Alexander Graf 2009-05-17 20:00 ` Avi Kivity 0 siblings, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-17 19:57 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 11:54, Avi Kivity <avi@redhat.com> wrote: > Alexander Graf wrote: >> Hyper-V uses some MSRs, some of which are actually reserved for >> BIOS usage. >> >> But let's be nice today and have it its way, because otherwise it >> fails >> terribly. >> >> For MSRs where I could find a name I used the name, otherwise >> they're just >> added in their hex form for now. >> >> > > Most of these are not Hyper-V MSRs. They are x86 MSRs that happen > to be hit by Hyper-v. > > >> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c >> index ef43a18..30e6b43 100644 >> --- a/arch/x86/kvm/svm.c >> +++ b/arch/x86/kvm/svm.c >> @@ -1932,6 +1932,7 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, >> unsigned ecx, u64 *data) >> *data = svm->hsave_msr; >> break; >> case MSR_VM_CR: >> + case 0x40000081: >> *data = 0; >> break; >> > > This probably is a Hyper-V MSR, but I don't see how it expects it to > be present in real hardware. Are you sure this is really needed? Well hyper-v just crashes/reboots if it get a #gp on that msr, so I suppose yes. > > >> @@ -2034,6 +2035,10 @@ static int svm_set_msr(struct kvm_vcpu >> *vcpu, unsigned ecx, u64 data) >> case MSR_VM_HSAVE_PA: >> svm->hsave_msr = data; >> break; >> + case MSR_VM_CR: >> + case MSR_VM_IGNNE: >> + case MSR_K8_HWCR: >> + break; >> > > Please add a ratelimited printk() if any value is written which > would cause behaviour which we do not emulate. This will prevent a > guest getting unexpected behaviour silently. Right. Good catch. Alex > > > -- > error compiling committee.c: too many arguments to function > ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 4/6] Implement Hyper-V MSRs 2009-05-17 19:57 ` Alexander Graf @ 2009-05-17 20:00 ` Avi Kivity 2009-05-17 20:27 ` Alexander Graf 2009-05-17 20:37 ` Alexander Graf 0 siblings, 2 replies; 41+ messages in thread From: Avi Kivity @ 2009-05-17 20:00 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com Alexander Graf wrote: >>> case MSR_VM_CR: >>> + case 0x40000081: >>> *data = 0; >>> break; >>> >> >> This probably is a Hyper-V MSR, but I don't see how it expects it to >> be present in real hardware. Are you sure this is really needed? > > Well hyper-v just crashes/reboots if it get a #gp on that msr, so I > suppose yes. This is suspicious. It won't get this MSR on real hardware. Maybe this was with cpuid.hypervisor enabled? -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 4/6] Implement Hyper-V MSRs 2009-05-17 20:00 ` Avi Kivity @ 2009-05-17 20:27 ` Alexander Graf 2009-05-17 20:37 ` Alexander Graf 1 sibling, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-17 20:27 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 22:00, Avi Kivity <avi@redhat.com> wrote: > Alexander Graf wrote: >>>> case MSR_VM_CR: >>>> + case 0x40000081: >>>> *data = 0; >>>> break; >>>> >>> >>> This probably is a Hyper-V MSR, but I don't see how it expects it >>> to be present in real hardware. Are you sure this is really needed? >> >> Well hyper-v just crashes/reboots if it get a #gp on that msr, so I >> suppose yes. > > This is suspicious. It won't get this MSR on real hardware. > > Maybe this was with cpuid.hypervisor enabled? Before I sent out this patch I rechecked if the 0x4 msr is really required because it seemed awkward to me too and it did, but I can recheck for a 3rd time :) Alex > > > -- > Do not meddle in the internals of kernels, for they are subtle and > quick to panic. > ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 4/6] Implement Hyper-V MSRs 2009-05-17 20:00 ` Avi Kivity 2009-05-17 20:27 ` Alexander Graf @ 2009-05-17 20:37 ` Alexander Graf 1 sibling, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-17 20:37 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 22:00, Avi Kivity <avi@redhat.com> wrote: > Alexander Graf wrote: >>>> case MSR_VM_CR: >>>> + case 0x40000081: >>>> *data = 0; >>>> break; >>>> >>> >>> This probably is a Hyper-V MSR, but I don't see how it expects it >>> to be present in real hardware. Are you sure this is really needed? >> >> Well hyper-v just crashes/reboots if it get a #gp on that msr, so I >> suppose yes. > > This is suspicious. It won't get this MSR on real hardware. > > Maybe this was with cpuid.hypervisor enabled? Hm - seems to boot fine without. Oh well :). Alex > > > -- > Do not meddle in the internals of kernels, for they are subtle and > quick to panic. > ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-15 8:22 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Alexander Graf 2009-05-15 8:22 ` [PATCH 4/6] Implement Hyper-V MSRs Alexander Graf @ 2009-05-15 13:40 ` Joerg Roedel 2009-05-17 19:59 ` Avi Kivity 2 siblings, 0 replies; 41+ messages in thread From: Joerg Roedel @ 2009-05-15 13:40 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm On Fri, May 15, 2009 at 10:22:17AM +0200, Alexander Graf wrote: > If we couldn't find a page on read_emulated, it might be a good > idea to tell the guest about that and inject a #PF. > > We do the same already for write faults. I don't know why it was > not implemented for reads. Have you checked that the emulator will never ever do speculative reads? This may be the reason why the fault was not injected here. > > Signed-off-by: Alexander Graf <agraf@suse.de> > --- > arch/x86/kvm/x86.c | 7 +++++-- > 1 files changed, 5 insertions(+), 2 deletions(-) > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 5fcde2c..5aa1219 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -2131,10 +2131,13 @@ static int emulator_read_emulated(unsigned long addr, > goto mmio; > > if (kvm_read_guest_virt(addr, val, bytes, vcpu) > - == X86EMUL_CONTINUE) > + == X86EMUL_CONTINUE) { > return X86EMUL_CONTINUE; > - if (gpa == UNMAPPED_GVA) > + } > + if (gpa == UNMAPPED_GVA) { > + kvm_inject_page_fault(vcpu, addr, 0); > return X86EMUL_PROPAGATE_FAULT; > + } > > mmio: > /* > -- > 1.6.0.2 > > -- | Advanced Micro Devices GmbH Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München System | Research | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni Center | Sitz: Dornach, Gemeinde Aschheim, Landkreis München | Registergericht München, HRB Nr. 43632 ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-15 8:22 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Alexander Graf 2009-05-15 8:22 ` [PATCH 4/6] Implement Hyper-V MSRs Alexander Graf 2009-05-15 13:40 ` [PATCH 3/6] Emulator: Inject #PF when page was not found Joerg Roedel @ 2009-05-17 19:59 ` Avi Kivity 2009-05-17 20:25 ` Alexander Graf 2 siblings, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 19:59 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm, joerg.roedel Alexander Graf wrote: > If we couldn't find a page on read_emulated, it might be a good > idea to tell the guest about that and inject a #PF. > > We do the same already for write faults. I don't know why it was > not implemented for reads. > > I can't think why it was done for writes. Normally, a guest page fault would be trapped and reflected a long time before emulation, in FNAME(page_fault)(), after walk_addr(). Can you give some details on the situation? What instruction was executed, and why kvm tried to emulate it? (I guess it depends on the relative priority of svm instruction intercepts and the page fault intercept?) -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-17 19:59 ` Avi Kivity @ 2009-05-17 20:25 ` Alexander Graf 2009-05-17 20:58 ` Avi Kivity 0 siblings, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-17 20:25 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 21:59, Avi Kivity <avi@redhat.com> wrote: > Alexander Graf wrote: >> If we couldn't find a page on read_emulated, it might be a good >> idea to tell the guest about that and inject a #PF. >> >> We do the same already for write faults. I don't know why it was >> not implemented for reads. >> >> > > I can't think why it was done for writes. Normally, a guest page > fault would be trapped and reflected a long time before emulation, > in FNAME(page_fault)(), after walk_addr(). > > Can you give some details on the situation? What instruction was > executed, and why kvm tried to emulate it? I remember it was something about accessing the apic with npt. Maybe the real problem was the restricted bit checking that made the emulated instruction behave differently from the real mmu. I really need to start writing down why I did things when doing them :). I can recheck if it still breaks without the inject. Alex > > > (I guess it depends on the relative priority of svm instruction > intercepts and the page fault intercept?) > > -- > Do not meddle in the internals of kernels, for they are subtle and > quick to panic. > ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-17 20:25 ` Alexander Graf @ 2009-05-17 20:58 ` Avi Kivity 2009-05-18 12:55 ` Alexander Graf 0 siblings, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 20:58 UTC (permalink / raw) To: Alexander Graf; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com Alexander Graf wrote: >> >> I can't think why it was done for writes. Normally, a guest page >> fault would be trapped and reflected a long time before emulation, in >> FNAME(page_fault)(), after walk_addr(). >> >> Can you give some details on the situation? What instruction was >> executed, and why kvm tried to emulate it? > > I remember it was something about accessing the apic with npt. Maybe > the real problem was the restricted bit checking that made the > emulated instruction behave differently from the real mmu. The apic should not be mapped by Hyper-V's shadow page tables, so this should have been handled by page_fault(). -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 3/6] Emulator: Inject #PF when page was not found 2009-05-17 20:58 ` Avi Kivity @ 2009-05-18 12:55 ` Alexander Graf 0 siblings, 0 replies; 41+ messages in thread From: Alexander Graf @ 2009-05-18 12:55 UTC (permalink / raw) To: Avi Kivity; +Cc: kvm@vger.kernel.org, joerg.roedel@amd.com On 17.05.2009, at 22:58, Avi Kivity wrote: > Alexander Graf wrote: >>> >>> I can't think why it was done for writes. Normally, a guest page >>> fault would be trapped and reflected a long time before emulation, >>> in FNAME(page_fault)(), after walk_addr(). >>> >>> Can you give some details on the situation? What instruction was >>> executed, and why kvm tried to emulate it? >> >> I remember it was something about accessing the apic with npt. >> Maybe the real problem was the restricted bit checking that made >> the emulated instruction behave differently from the real mmu. > > The apic should not be mapped by Hyper-V's shadow page tables, so > this should have been handled by page_fault(). I think I only had to include this to find out that the restricted bit was checked for, so I got a blue screen in the guest :-). Hyper-V works fine without this patch on NPT. Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 2/6] MMU: don't bail on PAT bits in PTE
  2009-05-15  8:22 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Alexander Graf
  2009-05-15  8:22   ` [PATCH 3/6] Emulator: Inject #PF when page was not found Alexander Graf
@ 2009-05-15 10:25   ` Michael S. Tsirkin
  2009-05-15 10:53     ` Alexander Graf
  1 sibling, 1 reply; 41+ messages in thread
From: Michael S. Tsirkin @ 2009-05-15 10:25 UTC (permalink / raw)
  To: Alexander Graf; +Cc: kvm, joerg.roedel

On Fri, May 15, 2009 at 10:22:16AM +0200, Alexander Graf wrote:
> A 64bit PTE can have bit7 set to 1 which means "Use this bit for the PAT".
> Currently KVM's MMU code treats this bit as reserved, even though it's not.
>
> As long as we're not required to make use of the PAT bits which is only
> required for DMA/MMIO from my understanding, we can safely ignore it.
>
> Hyper-V uses this bit for kernel PTEs.
>
> Signed-off-by: Alexander Graf <agraf@suse.de>
> ---
>  arch/x86/kvm/mmu.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 8fcdae9..cce055a 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2169,7 +2169,7 @@ static void reset_rsvds_bits_mask(struct kvm_vcpu *vcpu, int level)
>  		context->rsvd_bits_mask[1][1] = exb_bit_rsvd |
>  			rsvd_bits(maxphyaddr, 51) |
>  			rsvd_bits(13, 20);	/* large page */
> -		context->rsvd_bits_mask[1][0] = ~0ull;
> +		context->rsvd_bits_mask[1][0] = 0ull;
>  		break;
>  	}
> }

Just to make sure I understand what this does: if guest sets bit7, will
bit7 get set in shadow PTEs as well?

--
MST

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 2/6] MMU: don't bail on PAT bits in PTE 2009-05-15 10:25 ` [PATCH 2/6] MMU: don't bail on PAT bits in PTE Michael S. Tsirkin @ 2009-05-15 10:53 ` Alexander Graf 2009-05-15 13:19 ` Joerg Roedel 0 siblings, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-15 10:53 UTC (permalink / raw) To: Michael S. Tsirkin; +Cc: kvm, joerg.roedel On 15.05.2009, at 12:25, Michael S. Tsirkin wrote: > On Fri, May 15, 2009 at 10:22:16AM +0200, Alexander Graf wrote: >> A 64bit PTE can have bit7 set to 1 which means "Use this bit for >> the PAT". >> Currently KVM's MMU code treats this bit as reserved, even though >> it's not. >> >> As long as we're not required to make use of the PAT bits which is >> only >> required for DMA/MMIO from my understanding, we can safely ignore it. >> >> Hyper-V uses this bit for kernel PTEs. >> >> Signed-off-by: Alexander Graf <agraf@suse.de> >> --- >> arch/x86/kvm/mmu.c | 2 +- >> 1 files changed, 1 insertions(+), 1 deletions(-) >> >> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c >> index 8fcdae9..cce055a 100644 >> --- a/arch/x86/kvm/mmu.c >> +++ b/arch/x86/kvm/mmu.c >> @@ -2169,7 +2169,7 @@ static void reset_rsvds_bits_mask(struct >> kvm_vcpu *vcpu, int level) >> context->rsvd_bits_mask[1][1] = exb_bit_rsvd | >> rsvd_bits(maxphyaddr, 51) | >> rsvd_bits(13, 20); /* large page */ >> - context->rsvd_bits_mask[1][0] = ~0ull; >> + context->rsvd_bits_mask[1][0] = 0ull; >> break; >> } >> } > > Just to make sure I understand what this does: if guest sets bit7, > will > bit7 get set in shadow PTEs as well? I don't see any code that interprets bit7, so the shadow PTE should be completely unaffected. But to be sure I asked Jörg to take a look at it as well, as he's more familiar with the x86 SPT code than I am :-). Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 2/6] MMU: don't bail on PAT bits in PTE 2009-05-15 10:53 ` Alexander Graf @ 2009-05-15 13:19 ` Joerg Roedel 2009-05-17 9:51 ` Avi Kivity 0 siblings, 1 reply; 41+ messages in thread From: Joerg Roedel @ 2009-05-15 13:19 UTC (permalink / raw) To: Alexander Graf; +Cc: Michael S. Tsirkin, kvm On Fri, May 15, 2009 at 12:53:42PM +0200, Alexander Graf wrote: > > On 15.05.2009, at 12:25, Michael S. Tsirkin wrote: > >> On Fri, May 15, 2009 at 10:22:16AM +0200, Alexander Graf wrote: >>> A 64bit PTE can have bit7 set to 1 which means "Use this bit for the >>> PAT". >>> Currently KVM's MMU code treats this bit as reserved, even though >>> it's not. >>> >>> As long as we're not required to make use of the PAT bits which is >>> only >>> required for DMA/MMIO from my understanding, we can safely ignore it. >>> >>> Hyper-V uses this bit for kernel PTEs. >>> >>> Signed-off-by: Alexander Graf <agraf@suse.de> >>> --- >>> arch/x86/kvm/mmu.c | 2 +- >>> 1 files changed, 1 insertions(+), 1 deletions(-) >>> >>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c >>> index 8fcdae9..cce055a 100644 >>> --- a/arch/x86/kvm/mmu.c >>> +++ b/arch/x86/kvm/mmu.c >>> @@ -2169,7 +2169,7 @@ static void reset_rsvds_bits_mask(struct >>> kvm_vcpu *vcpu, int level) >>> context->rsvd_bits_mask[1][1] = exb_bit_rsvd | >>> rsvd_bits(maxphyaddr, 51) | >>> rsvd_bits(13, 20); /* large page */ >>> - context->rsvd_bits_mask[1][0] = ~0ull; >>> + context->rsvd_bits_mask[1][0] = 0ull; >>> break; >>> } >>> } >> >> Just to make sure I understand what this does: if guest sets bit7, >> will >> bit7 get set in shadow PTEs as well? > > I don't see any code that interprets bit7, so the shadow PTE should be > completely unaffected. > > But to be sure I asked Jörg to take a look at it as well, as he's more > familiar with the x86 SPT code than I am :-). The PAT bit is not propagated into the shadow page tables. Anyway, the problem is fixed the wrong way in this patch. 
The real problem is that a 4kb pte is checked with a mask meant for
large pages (which do not exist on walker level 0). The attached patch
fixes it the better way, imho.

From 7530aef3ed580b70a74224f8c04857754501c496 Mon Sep 17 00:00:00 2001
From: Joerg Roedel <joerg.roedel@amd.com>
Date: Fri, 15 May 2009 15:14:19 +0200
Subject: [PATCH] kvm/mmu: fix reserved bit checking on 4kb pte level

The reserved bits checking code looks at bit 7 of the pte to determine
if it has to use the mask for a large pte or a normal pde. This does
not work on 4kb pte level because bit 7 is used there for PAT. Account
for this in the checking function.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
---
 arch/x86/kvm/mmu.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 479e748..8d9552e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2124,9 +2124,11 @@ static void paging_free(struct kvm_vcpu *vcpu)

 static bool is_rsvd_bits_set(struct kvm_vcpu *vcpu, u64 gpte, int level)
 {
-	int bit7;
+	int bit7 = 0;
+
+	if (level != PT_PAGE_TABLE_LEVEL)
+		bit7 = (gpte >> 7) & 1;

-	bit7 = (gpte >> 7) & 1;
 	return (gpte & vcpu->arch.mmu.rsvd_bits_mask[bit7][level-1]) != 0;
 }
--
1.6.2.4

--
           | Advanced Micro Devices GmbH
 Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München
 System    |
 Research  | Geschäftsführer: Thomas M. McCoy, Giuliano Meroni
 Center    | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
           | Registergericht München, HRB Nr. 43632

^ permalink raw reply related	[flat|nested] 41+ messages in thread
* Re: [PATCH 2/6] MMU: don't bail on PAT bits in PTE 2009-05-15 13:19 ` Joerg Roedel @ 2009-05-17 9:51 ` Avi Kivity 0 siblings, 0 replies; 41+ messages in thread From: Avi Kivity @ 2009-05-17 9:51 UTC (permalink / raw) To: Joerg Roedel; +Cc: Alexander Graf, Michael S. Tsirkin, kvm Joerg Roedel wrote: > Subject: [PATCH] kvm/mmu: fix reserved bit checking on 4kb pte level > > The reserved bits checking code looks at bit 7 of the pte to determine > if it has to use the mask for a large pte or a normal pde. This does not > work on 4kb pte level because bit 7 is used there for PAT. Account this > in the checking function. > > > static bool is_rsvd_bits_set(struct kvm_vcpu *vcpu, u64 gpte, int level) > { > - int bit7; > + int bit7 = 0; > + > + if (level != PT_PAGE_TABLE_LEVEL) > + bit7 = (gpte >> 7) & 1; > > - bit7 = (gpte >> 7) & 1; > return (gpte & vcpu->arch.mmu.rsvd_bits_mask[bit7][level-1]) != 0; > } > > If we make rsvd_bits_mask[1][0] == rsvd_bits_mask[0][0], we don't need the extra check. That's why it is named bit7 and not pse (need to make sure bit 7 is not reserved in this case). -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support
  2009-05-15  8:22 [PATCH 0/6] Add rudimentary Hyper-V guest support Alexander Graf
  2009-05-15  8:22 ` [PATCH 1/6] Add definition for IGNNE MSR Alexander Graf
@ 2009-05-15 10:47 ` Alexander Graf
  2009-05-17 21:08   ` Avi Kivity
  1 sibling, 1 reply; 41+ messages in thread
From: Alexander Graf @ 2009-05-15 10:47 UTC (permalink / raw)
  To: KVM list; +Cc: Joerg Roedel

On 15.05.2009, at 10:22, Alexander Graf wrote:

> Now that we have nested SVM in place, let's make use of it and
> virtualize
> something non-kvm.
> The first interesting target that came to my mind here was Hyper-V.
>
> This patchset makes Windows Server 2008 boot with Hyper-V, which runs
> the "dom0" in virtualized mode already. I haven't been able to run a
> second VM within for now though, but maybe I just wasn't patient
> enough ;-).

In order to find out why things were slow with nested SVM I hacked
intercept reporting into debugfs in my local tree and found pretty
interesting results (using NPT):

SVM_EXIT_CLGI          3888080   0
SVM_EXIT_CPUID            3460   0
SVM_EXIT_CR0_SEL_WRI         0   0
SVM_EXIT_ERR                 0   0
SVM_EXIT_FERR_FREEZE         0   0
SVM_EXIT_GDTR_READ           0   0
SVM_EXIT_GDTR_WRITE          0   0
SVM_EXIT_HLT             40186   0
SVM_EXIT_ICEBP               0   0
SVM_EXIT_IDTR_READ           0   0
SVM_EXIT_IDTR_WRITE          0   0
SVM_EXIT_INIT                0   0
SVM_EXIT_INTR           193173   0
SVM_EXIT_INVD                0   0
SVM_EXIT_INVLPG              1   0
SVM_EXIT_INVLPGA        536994   0
SVM_EXIT_IOIO          3450484   0
SVM_EXIT_IRET                0   0
SVM_EXIT_LDTR_READ           0   0
SVM_EXIT_LDTR_WRITE          0   0
SVM_EXIT_MONITOR             0   0
SVM_EXIT_MSR            124614   0
SVM_EXIT_MWAIT               0   0
SVM_EXIT_MWAIT_COND          0   0
SVM_EXIT_NMI                 0   0
SVM_EXIT_NPF           1040416   0
SVM_EXIT_PAUSE               0   0
SVM_EXIT_POPF                0   0
SVM_EXIT_PUSHF               0   0
SVM_EXIT_RDPMC               0   0
SVM_EXIT_RDTSC               0   0
SVM_EXIT_RDTSCP              0   0
SVM_EXIT_RSM                 0   0
SVM_EXIT_SHUTDOWN            0   0
SVM_EXIT_SKINIT              0   0
SVM_EXIT_SMI                20   0
SVM_EXIT_STGI          3888080   0
SVM_EXIT_SWINT               0   0
SVM_EXIT_TASK_SWITCH         0   0
SVM_EXIT_TR_READ             0   0
SVM_EXIT_TR_WRITE            0   0
SVM_EXIT_VINTR          402865   0
SVM_EXIT_VMLOAD        3888096   0
SVM_EXIT_VMMCALL        767288   0
SVM_EXIT_VMRUN         3888096   0
SVM_EXIT_VMSAVE        3888096   0
SVM_EXIT_WBINVD             64   0

So apparently the most intercepts come from the SVM helper calls
(clgi, stgi, vmload, vmsave). I guess I need to get back to the
"emulate when GIF=0" approach to get things fast.

Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-15 10:47 ` [PATCH 0/6] Add rudimentary Hyper-V guest support Alexander Graf @ 2009-05-17 21:08 ` Avi Kivity 2009-05-18 12:45 ` Alexander Graf 0 siblings, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-17 21:08 UTC (permalink / raw) To: Alexander Graf; +Cc: KVM list, Joerg Roedel Alexander Graf wrote: > In order to find out why things were slow with nested SVM I hacked > intercept reporting into debugfs in my local tree and found pretty > interesting results (using NPT): > > [...] > So apparently the most intercepts come from the SVM helper calls > (clgi, stgi, vmload, vmsave). I guess I need to get back to the > "emulate when GIF=0" approach to get things fast. There's only a limited potential here (a factor of three, reducing 6 exits to 2, less the emulation overhead). There's a lot more to be gained from nested npt, since you'll avoid most of the original exits in the first place. -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support
  2009-05-17 21:08 ` Avi Kivity
@ 2009-05-18 12:45   ` Alexander Graf
  2009-05-18 13:29     ` Avi Kivity
  0 siblings, 1 reply; 41+ messages in thread
From: Alexander Graf @ 2009-05-18 12:45 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Joerg Roedel

On 17.05.2009, at 23:08, Avi Kivity wrote:

> Alexander Graf wrote:
>> In order to find out why things were slow with nested SVM I hacked
>> intercept reporting into debugfs in my local tree and found pretty
>> interesting results (using NPT):
>>
> [...]
>
>> So apparently the most intercepts come from the SVM helper calls
>> (clgi, stgi, vmload, vmsave). I guess I need to get back to the
>> "emulate when GIF=0" approach to get things fast.
>
> There's only a limited potential here (a factor of three, reducing 6
> exits to 2, less the emulation overhead). There's a lot more to be
> gained from nested npt, since you'll avoid most of the original
> exits in the first place.

I think the reverse is the case. Look at those numbers (w2k8 bootup):

http://pastebin.ca/1423596

The only thing nested NPT would achieve is a reduction of #NPF exits.
But they are absolutely in the minority today already. Normal #PF's do
get passed directly to the guest already.

Of course, this all depends on the workload. For kernbench-style
benchmarks nested NPT probably gives you a bigger win, but anything
doing IO is slowed down way more than it has to be now.

Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 12:45 ` Alexander Graf @ 2009-05-18 13:29 ` Avi Kivity 2009-05-18 13:35 ` Alexander Graf 2009-05-18 15:15 ` Alexander Graf 0 siblings, 2 replies; 41+ messages in thread From: Avi Kivity @ 2009-05-18 13:29 UTC (permalink / raw) To: Alexander Graf; +Cc: KVM list, Joerg Roedel Alexander Graf wrote: >> >> There's only a limited potential here (a factor of three, reducing 6 >> exits to 2, less the emulation overhead). There's a lot more to be >> gained from nested npt, since you'll avoid most of the original exits >> in the first place. > > I think the reversed is the case. Look at those numbers (w2k8 bootup): > > http://pastebin.ca/1423596 > > The only thing nested NPT would achieve is a reduction of #NPF exits. > But they are absolutely in the minority today already. Normal #PF's do > get directly passed to the guest already. #NPF exits are caused when guest/host mappings change, which they don't, or by mmio (which happens both for guest and nguest). I don't understand how you can pass #PFs directly to the guest. Surely the guest has enabled pagefault interception, and you need to set up its vmcb? > > Of course, this all depends on the workload. For kernbench style > benchmarks nested NPT probably gives you a bigger win, but anything > doing IO is slowed down way more than it has to now. What is causing 17K pio exits/sec? What port numbers? -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 13:29 ` Avi Kivity @ 2009-05-18 13:35 ` Alexander Graf 2009-05-18 13:44 ` Avi Kivity 2009-05-18 15:15 ` Alexander Graf 1 sibling, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-18 13:35 UTC (permalink / raw) To: Avi Kivity; +Cc: KVM list, Joerg Roedel On 18.05.2009, at 15:29, Avi Kivity wrote: > Alexander Graf wrote: >>> >>> There's only a limited potential here (a factor of three, reducing >>> 6 exits to 2, less the emulation overhead). There's a lot more to >>> be gained from nested npt, since you'll avoid most of the original >>> exits in the first place. >> >> I think the reversed is the case. Look at those numbers (w2k8 >> bootup): >> >> http://pastebin.ca/1423596 >> >> The only thing nested NPT would achieve is a reduction of #NPF >> exits. But they are absolutely in the minority today already. >> Normal #PF's do get directly passed to the guest already. > > #NPF exits are caused when guest/host mappings change, which they > don't, or by mmio (which happens both for guest and nguest). > > I don't understand how you can pass #PFs directly to the guest. > Surely the guest has enabled pagefault interception, and you need to > set up its vmcb? Ugh - looks like I totally forgot to include #PF exits in my stats, which is why I didn't see them. > >> >> Of course, this all depends on the workload. For kernbench style >> benchmarks nested NPT probably gives you a bigger win, but anything >> doing IO is slowed down way more than it has to now. > > What is causing 17K pio exits/sec? What port numbers? Any hints on how to easily find that out? For someone who's too stupid to get kvmtrace working :-). Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 13:35 ` Alexander Graf @ 2009-05-18 13:44 ` Avi Kivity 0 siblings, 0 replies; 41+ messages in thread From: Avi Kivity @ 2009-05-18 13:44 UTC (permalink / raw) To: Alexander Graf; +Cc: KVM list, Joerg Roedel Alexander Graf wrote: >>> Of course, this all depends on the workload. For kernbench style >>> benchmarks nested NPT probably gives you a bigger win, but anything >>> doing IO is slowed down way more than it has to now. >> >> What is causing 17K pio exits/sec? What port numbers? > > Any hints on how to easily find that out? For someone who's too stupid > to get kvmtrace working :-). You can always printk() every 1000 loops, but kvmtrace is actually pretty easy to use. Compile it in, run your guest (pinning to one cpu deconfuses the output), run kvmtrace -o blah, then use './kvmtrace_format formats' as a filter on the binary output. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 13:29 ` Avi Kivity 2009-05-18 13:35 ` Alexander Graf @ 2009-05-18 15:15 ` Alexander Graf 2009-05-18 15:20 ` Avi Kivity 1 sibling, 1 reply; 41+ messages in thread From: Alexander Graf @ 2009-05-18 15:15 UTC (permalink / raw) To: Avi Kivity; +Cc: KVM list, Joerg Roedel On 18.05.2009, at 15:29, Avi Kivity wrote: > Alexander Graf wrote: >>> >>> There's only a limited potential here (a factor of three, reducing >>> 6 exits to 2, less the emulation overhead). There's a lot more to >>> be gained from nested npt, since you'll avoid most of the original >>> exits in the first place. >> >> I think the reversed is the case. Look at those numbers (w2k8 >> bootup): >> >> http://pastebin.ca/1423596 >> >> The only thing nested NPT would achieve is a reduction of #NPF >> exits. But they are absolutely in the minority today already. >> Normal #PF's do get directly passed to the guest already. > > #NPF exits are caused when guest/host mappings change, which they > don't, or by mmio (which happens both for guest and nguest). > > I don't understand how you can pass #PFs directly to the guest. > Surely the guest has enabled pagefault interception, and you need to > set up its vmcb? I guess you're right: http://pastebin.ca/1426458 Alex ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 15:15 ` Alexander Graf @ 2009-05-18 15:20 ` Avi Kivity 2009-05-18 15:24 ` Alexander Graf 0 siblings, 1 reply; 41+ messages in thread From: Avi Kivity @ 2009-05-18 15:20 UTC (permalink / raw) To: Alexander Graf; +Cc: KVM list, Joerg Roedel Alexander Graf wrote: > > On 18.05.2009, at 15:29, Avi Kivity wrote: > >> Alexander Graf wrote: >>>> >>>> There's only a limited potential here (a factor of three, reducing >>>> 6 exits to 2, less the emulation overhead). There's a lot more to >>>> be gained from nested npt, since you'll avoid most of the original >>>> exits in the first place. >>> >>> I think the reversed is the case. Look at those numbers (w2k8 bootup): >>> >>> http://pastebin.ca/1423596 >>> >>> The only thing nested NPT would achieve is a reduction of #NPF >>> exits. But they are absolutely in the minority today already. Normal >>> #PF's do get directly passed to the guest already. >> >> #NPF exits are caused when guest/host mappings change, which they >> don't, or by mmio (which happens both for guest and nguest). >> >> I don't understand how you can pass #PFs directly to the guest. >> Surely the guest has enabled pagefault interception, and you need to >> set up its vmcb? > > I guess you're right: http://pastebin.ca/1426458 Any idea where the ioio exits come from? If it's IDE, we can eliminate them by using virtio. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support
  2009-05-18 15:20 ` Avi Kivity
@ 2009-05-18 15:24   ` Alexander Graf
  2009-05-18 15:28     ` Avi Kivity
  0 siblings, 1 reply; 41+ messages in thread
From: Alexander Graf @ 2009-05-18 15:24 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Joerg Roedel

On 18.05.2009, at 17:20, Avi Kivity wrote:

> Alexander Graf wrote:
>>
>> On 18.05.2009, at 15:29, Avi Kivity wrote:
>>
>>> Alexander Graf wrote:
>>>>>
>>>>> There's only a limited potential here (a factor of three,
>>>>> reducing 6 exits to 2, less the emulation overhead). There's a
>>>>> lot more to be gained from nested npt, since you'll avoid most
>>>>> of the original exits in the first place.
>>>>
>>>> I think the reversed is the case. Look at those numbers (w2k8
>>>> bootup):
>>>>
>>>> http://pastebin.ca/1423596
>>>>
>>>> The only thing nested NPT would achieve is a reduction of #NPF
>>>> exits. But they are absolutely in the minority today already.
>>>> Normal #PF's do get directly passed to the guest already.
>>>
>>> #NPF exits are caused when guest/host mappings change, which they
>>> don't, or by mmio (which happens both for guest and nguest).
>>>
>>> I don't understand how you can pass #PFs directly to the guest.
>>> Surely the guest has enabled pagefault interception, and you need
>>> to set up its vmcb?
>>
>> I guess you're right: http://pastebin.ca/1426458
>
> Any idea where the ioio exits come from? If it's IDE, we can
> eliminate them by using virtio.

I'm still not getting kvmtrace to work. ./kvmtrace -o log only gives
me empty files (4 bytes each), even though I did ./configure
--with-kvm-trace in the kernel dir.

Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support
  2009-05-18 15:24 ` Alexander Graf
@ 2009-05-18 15:28   ` Avi Kivity
  2009-05-18 15:32     ` Alexander Graf
  0 siblings, 1 reply; 41+ messages in thread
From: Avi Kivity @ 2009-05-18 15:28 UTC (permalink / raw)
  To: Alexander Graf; +Cc: KVM list, Joerg Roedel

Alexander Graf wrote:
>
> I'm still not getting kvmtrace to work. ./kvmtrace -o log only gives
> me empty files (4 bytes each), even though I did ./configure
> --with-kvm-trace in the kernel dir.

Which kernel dir? kvm-kmod? It worked for me there.

--
error compiling committee.c: too many arguments to function

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support
  2009-05-18 15:28 ` Avi Kivity
@ 2009-05-18 15:32   ` Alexander Graf
  2009-05-18 15:35     ` Avi Kivity
  0 siblings, 1 reply; 41+ messages in thread
From: Alexander Graf @ 2009-05-18 15:32 UTC (permalink / raw)
  To: Avi Kivity; +Cc: KVM list, Joerg Roedel

On 18.05.2009, at 17:28, Avi Kivity wrote:

> Alexander Graf wrote:
>>
>> I'm still not getting kvmtrace to work. ./kvmtrace -o log only
>> gives me empty files (4 bytes each), even though I did ./configure
>> --with-kvm-trace in the kernel dir.
>
> Which kernel dir? kvm-kmod? it worked for me there.

Have things changed again? I used the kvm/kernel directory from
qemu-kvm.git.

Are there any dependencies on the host kernel I might not have
fulfilled?

Alex

^ permalink raw reply	[flat|nested] 41+ messages in thread
* Re: [PATCH 0/6] Add rudimentary Hyper-V guest support 2009-05-18 15:32 ` Alexander Graf @ 2009-05-18 15:35 ` Avi Kivity 0 siblings, 0 replies; 41+ messages in thread From: Avi Kivity @ 2009-05-18 15:35 UTC (permalink / raw) To: Alexander Graf; +Cc: KVM list, Joerg Roedel Alexander Graf wrote: > > On 18.05.2009, at 17:28, Avi Kivity wrote: > >> Alexander Graf wrote: >>> >>> I'm still not getting kvmtrace to work. ./kvmtrace -o log only gives >>> me empty files (4 bytes each), even though I did ./configure >>> --with-kvm-trace in the kernel dir. >> >> Which kernel dir? kvm-kmod? it worked for me there. > > Have things changed again? I used the kvm/kernel directory from > qemu-kvm.git. They're much simpler. clone kvm-kmod.git (from same directory), 'git submodule update --init', you'll get linux-2.6 underneath. ./configure && make sync && make. > > Are the any dependencies on the host kernel I might not have fulfilled? > CONFIG_MARKERS. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 41+ messages in thread