* [PATCH 0/6] Nonatomic interrupt injection
From: Avi Kivity @ 2010-07-27 13:19 UTC
To: Marcelo Tosatti, kvm
This patchset changes interrupt injection to be done from normal process
context instead of from an interrupts-disabled context. This is useful for
real-mode interrupt injection on Intel without the current hacks (injecting
as a software interrupt of a vm86 task), for reducing latencies, and later,
for allowing nested virtualization code to use
kvm_read_guest()/kvm_write_guest() instead of kmap() to access the guest
vmcb/vmcs.
Seems to survive a hack that cancels every 16th entry, after injection has
already taken place.
Note: not yet ready, hangs fairly early on AMD. Seems to work well on Intel.
Please review carefully, esp. the first patch. Any missing kvm_make_request()
there may result in a hung guest.
v3: close new race between injection and entry
    fix Intel real-mode injection cancellation

v2: svm support (easier than expected)
    fix silly vmx warning
Avi Kivity (6):
  KVM: Check for pending events before attempting injection
  KVM: VMX: Split up vmx_complete_interrupts()
  KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts()
  KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry
  KVM: Non-atomic interrupt injection
  KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration
 arch/x86/include/asm/kvm_host.h |    1 +
 arch/x86/kvm/i8259.c            |    1 +
 arch/x86/kvm/lapic.c            |   12 +++-
 arch/x86/kvm/svm.c              |   20 ++++++-
 arch/x86/kvm/vmx.c              |  116 ++++++++++++++++++++++++++------------
 arch/x86/kvm/x86.c              |   44 ++++++++++----
 include/linux/kvm_host.h        |    1 +
 7 files changed, 142 insertions(+), 53 deletions(-)
* [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Avi Kivity @ 2010-07-27 13:19 UTC
To: Marcelo Tosatti, kvm

Instead of blindly attempting to inject an event before each guest entry,
check for a possible event first in vcpu->requests.  Sites that can trigger
event injection are modified to set KVM_REQ_EVENT:

- interrupt, nmi window opening
- ppr updates
- i8259 output changes
- local apic irr changes
- rflags updates
- gif flag set
- event set on exit

This improves non-injecting entry performance, and sets the stage for
non-atomic injection.

Signed-off-by: Avi Kivity <avi@redhat.com>
---
 arch/x86/kvm/i8259.c     |    1 +
 arch/x86/kvm/lapic.c     |   12 ++++++++++--
 arch/x86/kvm/svm.c       |    8 +++++++-
 arch/x86/kvm/vmx.c       |    6 ++++++
 arch/x86/kvm/x86.c       |   35 ++++++++++++++++++++++++++---------
 include/linux/kvm_host.h |    1 +
 6 files changed, 51 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/i8259.c b/arch/x86/kvm/i8259.c
index 8d10c06..9f7ab44 100644
--- a/arch/x86/kvm/i8259.c
+++ b/arch/x86/kvm/i8259.c
@@ -64,6 +64,7 @@ static void pic_unlock(struct kvm_pic *s)
 		if (!found)
 			found = s->kvm->bsp_vcpu;
 
+		kvm_make_request(KVM_REQ_EVENT, found);
 		kvm_vcpu_kick(found);
 	}
 }
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 77d8c0f..e83d203 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -259,9 +259,10 @@ static inline int apic_find_highest_isr(struct kvm_lapic *apic)
 
 static void apic_update_ppr(struct kvm_lapic *apic)
 {
-	u32 tpr, isrv, ppr;
+	u32 tpr, isrv, ppr, old_ppr;
 	int isr;
 
+	old_ppr = apic_get_reg(apic, APIC_PROCPRI);
 	tpr = apic_get_reg(apic, APIC_TASKPRI);
 	isr = apic_find_highest_isr(apic);
 	isrv = (isr != -1) ? isr : 0;
@@ -274,7 +275,10 @@ static void apic_update_ppr(struct kvm_lapic *apic)
 	apic_debug("vlapic %p, ppr 0x%x, isr 0x%x, isrv 0x%x",
 		   apic, ppr, isr, isrv);
 
-	apic_set_reg(apic, APIC_PROCPRI, ppr);
+	if (old_ppr != ppr) {
+		apic_set_reg(apic, APIC_PROCPRI, ppr);
+		kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
+	}
 }
 
 static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
@@ -391,6 +395,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			break;
 		}
 
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
 		kvm_vcpu_kick(vcpu);
 		break;
 
@@ -416,6 +421,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 					   "INIT on a runnable vcpu %d\n",
 					   vcpu->vcpu_id);
 			vcpu->arch.mp_state = KVM_MP_STATE_INIT_RECEIVED;
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
 			kvm_vcpu_kick(vcpu);
 		} else {
 			apic_debug("Ignoring de-assert INIT to vcpu %d\n",
@@ -430,6 +436,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
 			result = 1;
 			vcpu->arch.sipi_vector = vector;
 			vcpu->arch.mp_state = KVM_MP_STATE_SIPI_RECEIVED;
+			kvm_make_request(KVM_REQ_EVENT, vcpu);
 			kvm_vcpu_kick(vcpu);
 		}
 		break;
@@ -475,6 +482,7 @@ static void apic_set_eoi(struct kvm_lapic *apic)
 		trigger_mode = IOAPIC_EDGE_TRIG;
 	if (!(apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI))
 		kvm_ioapic_update_eoi(apic->vcpu->kvm, vector, trigger_mode);
+	kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
 }
 
 static void apic_send_ipi(struct kvm_lapic *apic)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 56c9b6b..a51e067 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -2258,6 +2258,7 @@ static int stgi_interception(struct vcpu_svm *svm)
 
 	svm->next_rip = kvm_rip_read(&svm->vcpu) + 3;
 	skip_emulated_instruction(&svm->vcpu);
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 
 	enable_gif(svm);
 
@@ -2644,6 +2645,7 @@ static int interrupt_window_interception(struct vcpu_svm *svm)
 {
 	struct kvm_run *kvm_run = svm->vcpu.run;
 
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
 	svm_clear_vintr(svm);
 	svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
 	/*
@@ -3089,8 +3091,10 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 
 	svm->int3_injected = 0;
 
-	if (svm->vcpu.arch.hflags & HF_IRET_MASK)
+	if (svm->vcpu.arch.hflags & HF_IRET_MASK) {
 		svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK);
+		kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+	}
 
 	svm->vcpu.arch.nmi_injected = false;
 	kvm_clear_exception_queue(&svm->vcpu);
@@ -3099,6 +3103,8 @@ static void svm_complete_interrupts(struct vcpu_svm *svm)
 	if (!(exitintinfo & SVM_EXITINTINFO_VALID))
 		return;
 
+	kvm_make_request(KVM_REQ_EVENT, &svm->vcpu);
+
 	vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK;
 	type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2fdcc98..d8edfe3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3348,6 +3348,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu)
 
 static int handle_tpr_below_threshold(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 1;
 }
 
@@ -3360,6 +3361,8 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu)
 	cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	++vcpu->stat.irq_window_exits;
 
 	/*
@@ -3616,6 +3619,7 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu)
 	cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING;
 	vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control);
 	++vcpu->stat.nmi_window_exits;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 
 	return 1;
 }
@@ -3849,6 +3853,8 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx)
 	if (!idtv_info_valid)
 		return;
 
+	kvm_make_request(KVM_REQ_EVENT, &vmx->vcpu);
+
 	vector = idt_vectoring_info & VECTORING_INFO_VECTOR_MASK;
 	type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 76fbc32..38e91b6 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -282,6 +282,8 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu,
 	u32 prev_nr;
 	int class1, class2;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	if (!vcpu->arch.exception.pending) {
 	queue:
 		vcpu->arch.exception.pending = true;
@@ -337,6 +339,7 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr,
 
 void kvm_inject_nmi(struct kvm_vcpu *vcpu)
 {
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	vcpu->arch.nmi_pending = 1;
 }
 EXPORT_SYMBOL_GPL(kvm_inject_nmi);
@@ -2356,6 +2359,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu,
 	if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR)
 		vcpu->arch.sipi_vector = events->sipi_vector;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -4059,6 +4064,7 @@ restart:
 
 	toggle_interruptibility(vcpu, vcpu->arch.emulate_ctxt.interruptibility);
 	kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
 	kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
 
@@ -4731,17 +4737,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
 		goto out;
 	}
 
-	inject_pending_event(vcpu);
+	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
+		inject_pending_event(vcpu);
 
-	/* enable NMI/IRQ window open exits if needed */
-	if (vcpu->arch.nmi_pending)
-		kvm_x86_ops->enable_nmi_window(vcpu);
-	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
-		kvm_x86_ops->enable_irq_window(vcpu);
+		/* enable NMI/IRQ window open exits if needed */
+		if (vcpu->arch.nmi_pending)
+			kvm_x86_ops->enable_nmi_window(vcpu);
+		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
+			kvm_x86_ops->enable_irq_window(vcpu);
 
-	if (kvm_lapic_enabled(vcpu)) {
-		update_cr8_intercept(vcpu);
-		kvm_lapic_sync_to_vapic(vcpu);
+		if (kvm_lapic_enabled(vcpu)) {
+			update_cr8_intercept(vcpu);
+			kvm_lapic_sync_to_vapic(vcpu);
+		}
 	}
 
 	srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
@@ -4980,6 +4988,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
 
 	vcpu->arch.exception.pending = false;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -5043,6 +5053,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu,
 				    struct kvm_mp_state *mp_state)
 {
 	vcpu->arch.mp_state = mp_state->mp_state;
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return 0;
 }
 
@@ -5077,6 +5088,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason,
 	memcpy(vcpu->arch.regs, c->regs, sizeof c->regs);
 	kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip);
 	kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 	return EMULATE_DONE;
 }
 EXPORT_SYMBOL_GPL(kvm_task_switch);
@@ -5147,6 +5159,8 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 	    !is_protmode(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return 0;
 }
 
@@ -5375,6 +5389,8 @@ int kvm_arch_vcpu_reset(struct kvm_vcpu *vcpu)
 	vcpu->arch.dr6 = DR6_FIXED_1;
 	vcpu->arch.dr7 = DR7_FIXED_1;
 
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
+
 	return kvm_x86_ops->vcpu_reset(vcpu);
 }
 
@@ -5683,6 +5699,7 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags)
 	    kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip))
 		rflags |= X86_EFLAGS_TF;
 	kvm_x86_ops->set_rflags(vcpu, rflags);
+	kvm_make_request(KVM_REQ_EVENT, vcpu);
 }
 EXPORT_SYMBOL_GPL(kvm_set_rflags);
 
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c13cc48..e41e66b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -39,6 +39,7 @@
 #define KVM_REQ_KVMCLOCK_UPDATE    8
 #define KVM_REQ_KICK               9
 #define KVM_REQ_DEACTIVATE_FPU    10
+#define KVM_REQ_EVENT             11
 
 #define KVM_USERSPACE_IRQ_SOURCE_ID	0

-- 
1.7.1
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Marcelo Tosatti @ 2010-07-28 16:21 UTC
To: Avi Kivity; +Cc: kvm

On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote:
> Instead of blindly attempting to inject an event before each guest entry,
> check for a possible event first in vcpu->requests.  Sites that can trigger
> event injection are modified to set KVM_REQ_EVENT:
[...]
> @@ -4731,17 +4737,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>  		goto out;
>  	}
> 
> -	inject_pending_event(vcpu);
> +	if (kvm_check_request(KVM_REQ_EVENT, vcpu)) {
> +		inject_pending_event(vcpu);
> 
> -	/* enable NMI/IRQ window open exits if needed */
> -	if (vcpu->arch.nmi_pending)
> -		kvm_x86_ops->enable_nmi_window(vcpu);
> -	else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> -		kvm_x86_ops->enable_irq_window(vcpu);
> +		/* enable NMI/IRQ window open exits if needed */
> +		if (vcpu->arch.nmi_pending)
> +			kvm_x86_ops->enable_nmi_window(vcpu);
> +		else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
> +			kvm_x86_ops->enable_irq_window(vcpu);

Problem is it might not be possible to inject the event signalled by
KVM_REQ_EVENT, say an interrupt from an irqchip, if there is an event
that needs reinjection (or an exception).

Perhaps moving atomic_set(&vcpu->guest_mode, 1) up to the preemptible
section is safe, because kvm_vcpu_kick can only IPI a stale vcpu->cpu
while preemption is enabled. In that case, it will hit

	if (!atomic_read(&vcpu->guest_mode))

later.

The KVM_REQ_EVENT idea is nice, though. Can you think of a way to fix
the issue?
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Avi Kivity @ 2010-07-28 16:31 UTC
To: Marcelo Tosatti; +Cc: kvm

On 07/28/2010 07:21 PM, Marcelo Tosatti wrote:
> On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote:
>> Instead of blindly attempting to inject an event before each guest entry,
>> check for a possible event first in vcpu->requests.  Sites that can trigger
>> event injection are modified to set KVM_REQ_EVENT:
[...]
> Problem is it might not be possible to inject the event signalled by
> KVM_REQ_EVENT, say an interrupt from an irqchip, if there is an event
> that needs reinjection (or an exception).

That can happen even now, no?  A pending exception, interrupt comes
along, injection picks up the exception but leaves the interrupt.

Now the situation can be more complicated:

- pending exception
- injection
- interrupt, sets KVM_REQ_EVENT
- notices KVM_REQ_EVENT
- drops KVM_REQ_EVENT, cancels exception (made pending again)
- goes back
- injection (injects exception again, interrupt is pending)

As far as I can tell, this is all fine.

> Perhaps moving atomic_set(&vcpu->guest_mode, 1) up to preemptible
> section is safe, because kvm_vcpu_kick can only IPI stale vcpu->cpu
> while preemption is enabled. In that case, it will hit
>
> 	if (!atomic_read(&vcpu->guest_mode))
>
> later.

I don't really follow.

> Although the KVM_REQ_EVENT idea is nice. Can you think of a way
> to fix the issue?

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Marcelo Tosatti @ 2010-07-28 16:37 UTC
To: Avi Kivity; +Cc: kvm

On Wed, Jul 28, 2010 at 07:31:03PM +0300, Avi Kivity wrote:
> On 07/28/2010 07:21 PM, Marcelo Tosatti wrote:
>> On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote:
>>> Instead of blindly attempting to inject an event before each guest entry,
>>> check for a possible event first in vcpu->requests.  Sites that can trigger
>>> event injection are modified to set KVM_REQ_EVENT:
[...]
>> Problem is it might not be possible to inject the event signalled by
>> KVM_REQ_EVENT, say an interrupt from an irqchip, if there is an event
>> that needs reinjection (or an exception).
>
> That can happen even now, no?  A pending exception, interrupt comes
> along, injection picks up the exception but leaves the interrupt.
>
> Now the situation can be more complicated:
>
> - pending exception
> - injection
> - interrupt, sets KVM_REQ_EVENT
> - notices KVM_REQ_EVENT
> - drops KVM_REQ_EVENT, cancels exception (made pending again)
> - goes back
> - injection (injects exception again, interrupt is pending)
>
> As far as I can tell, this is all fine.

But you cleared KVM_REQ_EVENT, which means you're not going to inject
the pending interrupt on the next entry.
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Avi Kivity @ 2010-07-28 16:53 UTC
To: Marcelo Tosatti; +Cc: kvm

On 07/28/2010 07:37 PM, Marcelo Tosatti wrote:
> On Wed, Jul 28, 2010 at 07:31:03PM +0300, Avi Kivity wrote:
>> On 07/28/2010 07:21 PM, Marcelo Tosatti wrote:
>>> On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote:
>>>> Instead of blindly attempting to inject an event before each guest entry,
>>>> check for a possible event first in vcpu->requests.  Sites that can trigger
>>>> event injection are modified to set KVM_REQ_EVENT:
[...]
>>> Problem is it might not be possible to inject the event signalled by
>>> KVM_REQ_EVENT, say an interrupt from an irqchip, if there is an event
>>> that needs reinjection (or an exception).
>>
>> That can happen even now, no?  A pending exception, interrupt comes
>> along, injection picks up the exception but leaves the interrupt.
>>
>> Now the situation can be more complicated:
>>
>> - pending exception
>> - injection
>> - interrupt, sets KVM_REQ_EVENT
>> - notices KVM_REQ_EVENT
>> - drops KVM_REQ_EVENT, cancels exception (made pending again)
>> - goes back
>> - injection (injects exception again, interrupt is pending)
>>
>> As far as I can tell, this is all fine.
>
> But you cleared KVM_REQ_EVENT. Which means you're not going to inject
> the pending interrupt on the next entry.

Doh.  So we need to set KVM_REQ_EVENT again, after the final check for
vcpu->requests, to make sure we redo injection again.

So we can make inject_pending_event() return true if there's more in
the queue, and if it did, re-raise KVM_REQ_EVENT just before entry?

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Marcelo Tosatti @ 2010-07-28 17:22 UTC
To: Avi Kivity; +Cc: kvm

On Wed, Jul 28, 2010 at 07:53:32PM +0300, Avi Kivity wrote:
> On 07/28/2010 07:37 PM, Marcelo Tosatti wrote:
>> On Wed, Jul 28, 2010 at 07:31:03PM +0300, Avi Kivity wrote:
>>> On 07/28/2010 07:21 PM, Marcelo Tosatti wrote:
[...]
>> But you cleared KVM_REQ_EVENT. Which means you're not going to inject
>> the pending interrupt on the next entry.
>
> Doh.  So we need to set KVM_REQ_EVENT again, after the final check
> for vcpu->requests, to make sure we redo injection again.
>
> So we can make inject_pending_event() return true if there's more in
> the queue, and if it did, re-raise KVM_REQ_EVENT just before entry?

Yeah, that would do it.
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Avi Kivity @ 2010-07-29 8:49 UTC
To: Marcelo Tosatti; +Cc: kvm

On 07/28/2010 08:22 PM, Marcelo Tosatti wrote:
[...]
>>> But you cleared KVM_REQ_EVENT. Which means you're not going to inject
>>> the pending interrupt on the next entry.
>>
>> Doh.  So we need to set KVM_REQ_EVENT again, after the final check
>> for vcpu->requests, to make sure we redo injection again.
>>
>> So we can make inject_pending_event() return true if there's more in
>> the queue, and if it did, re-raise KVM_REQ_EVENT just before entry?
>
> Yeah, that would do it.

On second and third thoughts, that is unneeded.  If an interrupt or nmi
is still pending after event injection, we will request an interrupt or
nmi window, which will set KVM_REQ_EVENT.  An exception cannot be
pending after an event injection, since it is the highest priority
event.

Yes?

-- 
error compiling committee.c: too many arguments to function
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Marcelo Tosatti @ 2010-07-29 15:44 UTC
To: Avi Kivity; +Cc: kvm

On Thu, Jul 29, 2010 at 11:49:31AM +0300, Avi Kivity wrote:
[...]
> On second and third thoughts, that is unneeded.  If an interrupt or
> nmi is still pending after event injection, we will request an
> interrupt or nmi window, which will set KVM_REQ_EVENT.  An exception
> cannot be pending after an event injection, since it is the highest
> priority event.
>
> Yes?

Yep.  Userspace irqchip is still broken, though.  I can't see what's
wrong with svm.
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection
From: Gleb Natapov @ 2010-07-29 6:51 UTC
To: Avi Kivity; +Cc: Marcelo Tosatti, kvm

On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote:
> Instead of blindly attempting to inject an event before each guest entry,
> check for a possible event first in vcpu->requests.  Sites that can trigger
> event injection are modified to set KVM_REQ_EVENT:
>
> - interrupt, nmi window opening
> - ppr updates
> - i8259 output changes
> - local apic irr changes
> - rflags updates
> - gif flag set
> - event set on exit

What about the userspace irq chip?  Does it work with this patch?  I
don't see that you set KVM_REQ_EVENT on ioctl(KVM_INTERRUPT), for
instance, and vcpu->run->request_interrupt_window should probably be
checked outside of the if (KVM_REQ_EVENT) block.  It looks like with
this approach we scatter irq injection logic all over the code instead
of having it in one place.

> This improves non-injecting entry performance, and sets the stage for
> non-atomic injection.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
[...]
{ > struct kvm_run *kvm_run = svm->vcpu.run; > > + kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); > svm_clear_vintr(svm); > svm->vmcb->control.int_ctl &= ~V_IRQ_MASK; > /* > @@ -3089,8 +3091,10 @@ static void svm_complete_interrupts(struct vcpu_svm *svm) > > svm->int3_injected = 0; > > - if (svm->vcpu.arch.hflags & HF_IRET_MASK) > + if (svm->vcpu.arch.hflags & HF_IRET_MASK) { > svm->vcpu.arch.hflags &= ~(HF_NMI_MASK | HF_IRET_MASK); > + kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); > + } > > svm->vcpu.arch.nmi_injected = false; > kvm_clear_exception_queue(&svm->vcpu); > @@ -3099,6 +3103,8 @@ static void svm_complete_interrupts(struct vcpu_svm *svm) > if (!(exitintinfo & SVM_EXITINTINFO_VALID)) > return; > > + kvm_make_request(KVM_REQ_EVENT, &svm->vcpu); > + > vector = exitintinfo & SVM_EXITINTINFO_VEC_MASK; > type = exitintinfo & SVM_EXITINTINFO_TYPE_MASK; > > diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c > index 2fdcc98..d8edfe3 100644 > --- a/arch/x86/kvm/vmx.c > +++ b/arch/x86/kvm/vmx.c > @@ -3348,6 +3348,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu) > > static int handle_tpr_below_threshold(struct kvm_vcpu *vcpu) > { > + kvm_make_request(KVM_REQ_EVENT, vcpu); > return 1; > } > > @@ -3360,6 +3361,8 @@ static int handle_interrupt_window(struct kvm_vcpu *vcpu) > cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_INTR_PENDING; > vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control); > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > ++vcpu->stat.irq_window_exits; > > /* > @@ -3616,6 +3619,7 @@ static int handle_nmi_window(struct kvm_vcpu *vcpu) > cpu_based_vm_exec_control &= ~CPU_BASED_VIRTUAL_NMI_PENDING; > vmcs_write32(CPU_BASED_VM_EXEC_CONTROL, cpu_based_vm_exec_control); > ++vcpu->stat.nmi_window_exits; > + kvm_make_request(KVM_REQ_EVENT, vcpu); > > return 1; > } > @@ -3849,6 +3853,8 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) > if (!idtv_info_valid) > return; > > + kvm_make_request(KVM_REQ_EVENT, &vmx->vcpu); > + > vector = 
idt_vectoring_info & VECTORING_INFO_VECTOR_MASK; > type = idt_vectoring_info & VECTORING_INFO_TYPE_MASK; > > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c > index 76fbc32..38e91b6 100644 > --- a/arch/x86/kvm/x86.c > +++ b/arch/x86/kvm/x86.c > @@ -282,6 +282,8 @@ static void kvm_multiple_exception(struct kvm_vcpu *vcpu, > u32 prev_nr; > int class1, class2; > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > if (!vcpu->arch.exception.pending) { > queue: > vcpu->arch.exception.pending = true; > @@ -337,6 +339,7 @@ void kvm_inject_page_fault(struct kvm_vcpu *vcpu, unsigned long addr, > > void kvm_inject_nmi(struct kvm_vcpu *vcpu) > { > + kvm_make_request(KVM_REQ_EVENT, vcpu); > vcpu->arch.nmi_pending = 1; > } > EXPORT_SYMBOL_GPL(kvm_inject_nmi); > @@ -2356,6 +2359,8 @@ static int kvm_vcpu_ioctl_x86_set_vcpu_events(struct kvm_vcpu *vcpu, > if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR) > vcpu->arch.sipi_vector = events->sipi_vector; > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > return 0; > } > > @@ -4059,6 +4064,7 @@ restart: > > toggle_interruptibility(vcpu, vcpu->arch.emulate_ctxt.interruptibility); > kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags); > + kvm_make_request(KVM_REQ_EVENT, vcpu); > memcpy(vcpu->arch.regs, c->regs, sizeof c->regs); > kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip); > > @@ -4731,17 +4737,19 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) > goto out; > } > > - inject_pending_event(vcpu); > + if (kvm_check_request(KVM_REQ_EVENT, vcpu)) { > + inject_pending_event(vcpu); > > - /* enable NMI/IRQ window open exits if needed */ > - if (vcpu->arch.nmi_pending) > - kvm_x86_ops->enable_nmi_window(vcpu); > - else if (kvm_cpu_has_interrupt(vcpu) || req_int_win) > - kvm_x86_ops->enable_irq_window(vcpu); > + /* enable NMI/IRQ window open exits if needed */ > + if (vcpu->arch.nmi_pending) > + kvm_x86_ops->enable_nmi_window(vcpu); > + else if (kvm_cpu_has_interrupt(vcpu) || req_int_win) > + 
kvm_x86_ops->enable_irq_window(vcpu); > > - if (kvm_lapic_enabled(vcpu)) { > - update_cr8_intercept(vcpu); > - kvm_lapic_sync_to_vapic(vcpu); > + if (kvm_lapic_enabled(vcpu)) { > + update_cr8_intercept(vcpu); > + kvm_lapic_sync_to_vapic(vcpu); > + } > } > > srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); > @@ -4980,6 +4988,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs) > > vcpu->arch.exception.pending = false; > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > return 0; > } > > @@ -5043,6 +5053,7 @@ int kvm_arch_vcpu_ioctl_set_mpstate(struct kvm_vcpu *vcpu, > struct kvm_mp_state *mp_state) > { > vcpu->arch.mp_state = mp_state->mp_state; > + kvm_make_request(KVM_REQ_EVENT, vcpu); > return 0; > } > > @@ -5077,6 +5088,7 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int reason, > memcpy(vcpu->arch.regs, c->regs, sizeof c->regs); > kvm_rip_write(vcpu, vcpu->arch.emulate_ctxt.eip); > kvm_x86_ops->set_rflags(vcpu, vcpu->arch.emulate_ctxt.eflags); > + kvm_make_request(KVM_REQ_EVENT, vcpu); > return EMULATE_DONE; > } > EXPORT_SYMBOL_GPL(kvm_task_switch); > @@ -5147,6 +5159,8 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, > !is_protmode(vcpu)) > vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE; > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > return 0; > } > > @@ -5375,6 +5389,8 @@ int kvm_arch_vcpu_reset(struct kvm_vcpu *vcpu) > vcpu->arch.dr6 = DR6_FIXED_1; > vcpu->arch.dr7 = DR7_FIXED_1; > > + kvm_make_request(KVM_REQ_EVENT, vcpu); > + > return kvm_x86_ops->vcpu_reset(vcpu); > } > > @@ -5683,6 +5699,7 @@ void kvm_set_rflags(struct kvm_vcpu *vcpu, unsigned long rflags) > kvm_is_linear_rip(vcpu, vcpu->arch.singlestep_rip)) > rflags |= X86_EFLAGS_TF; > kvm_x86_ops->set_rflags(vcpu, rflags); > + kvm_make_request(KVM_REQ_EVENT, vcpu); > } > EXPORT_SYMBOL_GPL(kvm_set_rflags); > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h > index c13cc48..e41e66b 100644 > --- a/include/linux/kvm_host.h > 
+++ b/include/linux/kvm_host.h > @@ -39,6 +39,7 @@ > #define KVM_REQ_KVMCLOCK_UPDATE 8 > #define KVM_REQ_KICK 9 > #define KVM_REQ_DEACTIVATE_FPU 10 > +#define KVM_REQ_EVENT 11 > > #define KVM_USERSPACE_IRQ_SOURCE_ID 0 > > -- > 1.7.1 -- Gleb. ^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [PATCH 1/6] KVM: Check for pending events before attempting injection 2010-07-29 6:51 ` Gleb Natapov @ 2010-07-29 8:56 ` Avi Kivity 0 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-29 8:56 UTC (permalink / raw) To: Gleb Natapov; +Cc: Marcelo Tosatti, kvm On 07/29/2010 09:51 AM, Gleb Natapov wrote: > On Tue, Jul 27, 2010 at 04:19:35PM +0300, Avi Kivity wrote: >> Instead of blindly attempting to inject an event before each guest entry, >> check for a possible event first in vcpu->requests. Sites that can trigger >> event injection are modified to set KVM_REQ_EVENT: >> >> - interrupt, nmi window opening >> - ppr updates >> - i8259 output changes >> - local apic irr changes >> - rflags updates >> - gif flag set >> - event set on exit >> > What about userspace irq chip? Does it work with this patch? I don't see > that you set KVM_REQ_EVENT on ioctl(KVM_INTERRUPT) for instance and > vcpu->run->request_interrupt_window should be probably checked out of > if (KVM_REQ_EVEN). Right. > It looks like with this approach we scatter irq > injection logic all over the code instead of having it in one place. One place is better, but it means we have to poll all event types on every entry. We can go back to one place by having a mini-API for events (extending the kvm_queue_exception family) that would take care of the details. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH 2/6] KVM: VMX: Split up vmx_complete_interrupts() 2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity 2010-07-27 13:19 ` [PATCH 1/6] KVM: Check for pending events before attempting injection Avi Kivity @ 2010-07-27 13:19 ` Avi Kivity 2010-07-27 13:19 ` [PATCH 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() Avi Kivity ` (3 subsequent siblings) 5 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-27 13:19 UTC (permalink / raw) To: Marcelo Tosatti, kvm vmx_complete_interrupts() does too much, split it up: - vmx_vcpu_run() gets the "cache important vmcs fields" part - a new vmx_complete_atomic_exit() gets the parts that must be done atomically - a new vmx_recover_nmi_blocking() does what its name says - vmx_complete_interrupts() retains the event injection recovery code This helps in reducing the work done in atomic context. Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/kvm/vmx.c | 39 +++++++++++++++++++++++++++------------ 1 files changed, 27 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index d8edfe3..7483da7 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -125,6 +125,7 @@ struct vcpu_vmx { unsigned long host_rsp; int launched; u8 fail; + u32 exit_intr_info; u32 idt_vectoring_info; struct shared_msr_entry *guest_msrs; int nmsrs; @@ -3796,18 +3797,9 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) vmcs_write32(TPR_THRESHOLD, irr); } -static void vmx_complete_interrupts(struct vcpu_vmx *vmx) +static void vmx_complete_atomic_exit(struct vcpu_vmx *vmx) { - u32 exit_intr_info; - u32 idt_vectoring_info = vmx->idt_vectoring_info; - bool unblock_nmi; - u8 vector; - int type; - bool idtv_info_valid; - - exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO); - - vmx->exit_reason = vmcs_read32(VM_EXIT_REASON); + u32 exit_intr_info = vmx->exit_intr_info; /* Handle machine checks before interrupts are enabled */ if 
((vmx->exit_reason == EXIT_REASON_MCE_DURING_VMENTRY) @@ -3822,8 +3814,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) asm("int $2"); kvm_after_handle_nmi(&vmx->vcpu); } +} - idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK; +static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx) +{ + u32 exit_intr_info = vmx->exit_intr_info; + bool unblock_nmi; + u8 vector; + bool idtv_info_valid; + + idtv_info_valid = vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK; if (cpu_has_virtual_nmis()) { unblock_nmi = (exit_intr_info & INTR_INFO_UNBLOCK_NMI) != 0; @@ -3845,6 +3845,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) } else if (unlikely(vmx->soft_vnmi_blocked)) vmx->vnmi_blocked_time += ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time)); +} + +static void vmx_complete_interrupts(struct vcpu_vmx *vmx) +{ + u32 idt_vectoring_info = vmx->idt_vectoring_info; + u8 vector; + int type; + bool idtv_info_valid; + + idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK; vmx->vcpu.arch.nmi_injected = false; kvm_clear_exception_queue(&vmx->vcpu); @@ -4057,6 +4067,11 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS)); vmx->launched = 1; + vmx->exit_reason = vmcs_read32(VM_EXIT_REASON); + vmx->exit_intr_info = vmcs_read32(VM_EXIT_INTR_INFO); + + vmx_complete_atomic_exit(vmx); + vmx_recover_nmi_blocking(vmx); vmx_complete_interrupts(vmx); } -- 1.7.1 ^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() 2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity 2010-07-27 13:19 ` [PATCH 1/6] KVM: Check for pending events before attempting injection Avi Kivity 2010-07-27 13:19 ` [PATCH 2/6] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity @ 2010-07-27 13:19 ` Avi Kivity 2010-07-27 13:19 ` [PATCH 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity ` (2 subsequent siblings) 5 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-27 13:19 UTC (permalink / raw) To: Marcelo Tosatti, kvm This allows reuse of vmx_complete_interrupts() for cancelling injections. Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/kvm/vmx.c | 9 ++++++--- 1 files changed, 6 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 7483da7..738b21f 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -182,6 +182,7 @@ static int init_rmode(struct kvm *kvm); static u64 construct_eptp(unsigned long root_hpa); static void kvm_cpu_vmxon(u64 addr); static void kvm_cpu_vmxoff(void); +static void fixup_rmode_irq(struct vcpu_vmx *vmx); static DEFINE_PER_CPU(struct vmcs *, vmxarea); static DEFINE_PER_CPU(struct vmcs *, current_vmcs); @@ -3849,11 +3850,15 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx) static void vmx_complete_interrupts(struct vcpu_vmx *vmx) { - u32 idt_vectoring_info = vmx->idt_vectoring_info; + u32 idt_vectoring_info; u8 vector; int type; bool idtv_info_valid; + if (vmx->rmode.irq.pending) + fixup_rmode_irq(vmx); + + idt_vectoring_info = vmx->idt_vectoring_info; idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK; vmx->vcpu.arch.nmi_injected = false; @@ -4061,8 +4066,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu) vcpu->arch.regs_dirty = 0; vmx->idt_vectoring_info = vmcs_read32(IDT_VECTORING_INFO_FIELD); - if (vmx->rmode.irq.pending) - 
fixup_rmode_irq(vmx); asm("mov %0, %%ds; mov %0, %%es" : : "r"(__USER_DS)); vmx->launched = 1; -- 1.7.1 ^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry 2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity ` (2 preceding siblings ...) 2010-07-27 13:19 ` [PATCH 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() Avi Kivity @ 2010-07-27 13:19 ` Avi Kivity 2010-07-27 13:19 ` [PATCH 5/6] KVM: Non-atomic interrupt injection Avi Kivity 2010-07-27 13:19 ` [PATCH 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration Avi Kivity 5 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-27 13:19 UTC (permalink / raw) To: Marcelo Tosatti, kvm Currently vmx_complete_interrupts() can decode event information from vmx exit fields into the generic kvm event queues. Make it able to decode the information from the entry fields as well by parametrizing it. Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/kvm/vmx.c | 34 +++++++++++++++++++++------------- 1 files changed, 21 insertions(+), 13 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 738b21f..99a75d3 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -182,7 +182,7 @@ static int init_rmode(struct kvm *kvm); static u64 construct_eptp(unsigned long root_hpa); static void kvm_cpu_vmxon(u64 addr); static void kvm_cpu_vmxoff(void); -static void fixup_rmode_irq(struct vcpu_vmx *vmx); +static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info); static DEFINE_PER_CPU(struct vmcs *, vmxarea); static DEFINE_PER_CPU(struct vmcs *, current_vmcs); @@ -3848,17 +3848,18 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx) ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time)); } -static void vmx_complete_interrupts(struct vcpu_vmx *vmx) +static void __vmx_complete_interrupts(struct vcpu_vmx *vmx, + u32 idt_vectoring_info, + int instr_len_field, + int error_code_field) { - u32 idt_vectoring_info; u8 vector; int type; bool idtv_info_valid; if (vmx->rmode.irq.pending) - 
fixup_rmode_irq(vmx); + fixup_rmode_irq(vmx, &idt_vectoring_info); - idt_vectoring_info = vmx->idt_vectoring_info; idtv_info_valid = idt_vectoring_info & VECTORING_INFO_VALID_MASK; vmx->vcpu.arch.nmi_injected = false; @@ -3886,18 +3887,18 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) break; case INTR_TYPE_SOFT_EXCEPTION: vmx->vcpu.arch.event_exit_inst_len = - vmcs_read32(VM_EXIT_INSTRUCTION_LEN); + vmcs_read32(instr_len_field); /* fall through */ case INTR_TYPE_HARD_EXCEPTION: if (idt_vectoring_info & VECTORING_INFO_DELIVER_CODE_MASK) { - u32 err = vmcs_read32(IDT_VECTORING_ERROR_CODE); + u32 err = vmcs_read32(error_code_field); kvm_queue_exception_e(&vmx->vcpu, vector, err); } else kvm_queue_exception(&vmx->vcpu, vector); break; case INTR_TYPE_SOFT_INTR: vmx->vcpu.arch.event_exit_inst_len = - vmcs_read32(VM_EXIT_INSTRUCTION_LEN); + vmcs_read32(instr_len_field); /* fall through */ case INTR_TYPE_EXT_INTR: kvm_queue_interrupt(&vmx->vcpu, vector, @@ -3908,24 +3909,31 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) } } +static void vmx_complete_interrupts(struct vcpu_vmx *vmx) +{ + __vmx_complete_interrupts(vmx, vmx->idt_vectoring_info, + VM_EXIT_INSTRUCTION_LEN, + IDT_VECTORING_ERROR_CODE); +} + /* * Failure to inject an interrupt should give us the information * in IDT_VECTORING_INFO_FIELD. However, if the failure occurs * when fetching the interrupt redirection bitmap in the real-mode * tss, this doesn't happen. So we do it ourselves. 
*/ -static void fixup_rmode_irq(struct vcpu_vmx *vmx) +static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info) { vmx->rmode.irq.pending = 0; if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip) return; kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip); - if (vmx->idt_vectoring_info & VECTORING_INFO_VALID_MASK) { - vmx->idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK; - vmx->idt_vectoring_info |= INTR_TYPE_EXT_INTR; + if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) { + *idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK; + *idt_vectoring_info |= INTR_TYPE_EXT_INTR; return; } - vmx->idt_vectoring_info = + *idt_vectoring_info = VECTORING_INFO_VALID_MASK | INTR_TYPE_EXT_INTR | vmx->rmode.irq.vector; -- 1.7.1 ^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 5/6] KVM: Non-atomic interrupt injection 2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity ` (3 preceding siblings ...) 2010-07-27 13:19 ` [PATCH 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity @ 2010-07-27 13:19 ` Avi Kivity 2010-07-27 13:19 ` [PATCH 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration Avi Kivity 5 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-27 13:19 UTC (permalink / raw) To: Marcelo Tosatti, kvm Change the interrupt injection code to work from preemptible, interrupts enabled context. This works by adding a ->cancel_injection() operation that undoes an injection in case we were not able to actually enter the guest (this condition could never happen with atomic injection). Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/svm.c | 12 ++++++++++++ arch/x86/kvm/vmx.c | 11 +++++++++++ arch/x86/kvm/x86.c | 31 ++++++++++++++++--------------- 4 files changed, 40 insertions(+), 15 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 502e53f..5dd797c 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -505,6 +505,7 @@ struct kvm_x86_ops { void (*queue_exception)(struct kvm_vcpu *vcpu, unsigned nr, bool has_error_code, u32 error_code, bool reinject); + void (*cancel_injection)(struct kvm_vcpu *vcpu); int (*interrupt_allowed)(struct kvm_vcpu *vcpu); int (*nmi_allowed)(struct kvm_vcpu *vcpu); bool (*get_nmi_mask)(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c index a51e067..4d8a858 100644 --- a/arch/x86/kvm/svm.c +++ b/arch/x86/kvm/svm.c @@ -3141,6 +3141,17 @@ static void svm_complete_interrupts(struct vcpu_svm *svm) } } +static void svm_cancel_injection(struct kvm_vcpu *vcpu) +{ + struct vcpu_svm *svm = to_svm(vcpu); + struct vmcb_control_area *control = &svm->vmcb->control; + + 
control->exit_int_info = control->event_inj; + control->exit_int_info_err = control->event_inj_err; + control->event_inj = 0; + svm_complete_interrupts(svm); +} + #ifdef CONFIG_X86_64 #define R "r" #else @@ -3499,6 +3510,7 @@ static struct kvm_x86_ops svm_x86_ops = { .set_irq = svm_set_irq, .set_nmi = svm_inject_nmi, .queue_exception = svm_queue_exception, + .cancel_injection = svm_cancel_injection, .interrupt_allowed = svm_interrupt_allowed, .nmi_allowed = svm_nmi_allowed, .get_nmi_mask = svm_get_nmi_mask, diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 99a75d3..4122baa 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -3916,6 +3916,16 @@ static void vmx_complete_interrupts(struct vcpu_vmx *vmx) IDT_VECTORING_ERROR_CODE); } +static void vmx_cancel_injection(struct kvm_vcpu *vcpu) +{ + __vmx_complete_interrupts(to_vmx(vcpu), + vmcs_read32(VM_ENTRY_INTR_INFO_FIELD), + VM_ENTRY_INSTRUCTION_LEN, + VM_ENTRY_EXCEPTION_ERROR_CODE); + + vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0); +} + /* * Failure to inject an interrupt should give us the information * in IDT_VECTORING_INFO_FIELD. 
However, if the failure occurs @@ -4368,6 +4378,7 @@ static struct kvm_x86_ops vmx_x86_ops = { .set_irq = vmx_inject_irq, .set_nmi = vmx_inject_nmi, .queue_exception = vmx_queue_exception, + .cancel_injection = vmx_cancel_injection, .interrupt_allowed = vmx_interrupt_allowed, .nmi_allowed = vmx_nmi_allowed, .get_nmi_mask = vmx_get_nmi_mask, diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 38e91b6..1ea3f8b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4715,6 +4715,21 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (unlikely(r)) goto out; + if (kvm_check_request(KVM_REQ_EVENT, vcpu)) { + inject_pending_event(vcpu); + + /* enable NMI/IRQ window open exits if needed */ + if (vcpu->arch.nmi_pending) + kvm_x86_ops->enable_nmi_window(vcpu); + else if (kvm_cpu_has_interrupt(vcpu) || req_int_win) + kvm_x86_ops->enable_irq_window(vcpu); + + if (kvm_lapic_enabled(vcpu)) { + update_cr8_intercept(vcpu); + kvm_lapic_sync_to_vapic(vcpu); + } + } + preempt_disable(); kvm_x86_ops->prepare_guest_switch(vcpu); @@ -4733,25 +4748,11 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) smp_wmb(); local_irq_enable(); preempt_enable(); + kvm_x86_ops->cancel_injection(vcpu); r = 1; goto out; } - if (kvm_check_request(KVM_REQ_EVENT, vcpu)) { - inject_pending_event(vcpu); - - /* enable NMI/IRQ window open exits if needed */ - if (vcpu->arch.nmi_pending) - kvm_x86_ops->enable_nmi_window(vcpu); - else if (kvm_cpu_has_interrupt(vcpu) || req_int_win) - kvm_x86_ops->enable_irq_window(vcpu); - - if (kvm_lapic_enabled(vcpu)) { - update_cr8_intercept(vcpu); - kvm_lapic_sync_to_vapic(vcpu); - } - } - srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx); kvm_guest_enter(); -- 1.7.1 ^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration 2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity ` (4 preceding siblings ...) 2010-07-27 13:19 ` [PATCH 5/6] KVM: Non-atomic interrupt injection Avi Kivity @ 2010-07-27 13:19 ` Avi Kivity 5 siblings, 0 replies; 16+ messages in thread From: Avi Kivity @ 2010-07-27 13:19 UTC (permalink / raw) To: Marcelo Tosatti, kvm No code changes. Signed-off-by: Avi Kivity <avi@redhat.com> --- arch/x86/kvm/vmx.c | 47 +++++++++++++++++++++++------------------------ 1 files changed, 23 insertions(+), 24 deletions(-) diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c index 4122baa..7686a7a 100644 --- a/arch/x86/kvm/vmx.c +++ b/arch/x86/kvm/vmx.c @@ -182,7 +182,6 @@ static int init_rmode(struct kvm *kvm); static u64 construct_eptp(unsigned long root_hpa); static void kvm_cpu_vmxon(u64 addr); static void kvm_cpu_vmxoff(void); -static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info); static DEFINE_PER_CPU(struct vmcs *, vmxarea); static DEFINE_PER_CPU(struct vmcs *, current_vmcs); @@ -3848,6 +3847,29 @@ static void vmx_recover_nmi_blocking(struct vcpu_vmx *vmx) ktime_to_ns(ktime_sub(ktime_get(), vmx->entry_time)); } +/* + * Failure to inject an interrupt should give us the information + * in IDT_VECTORING_INFO_FIELD. However, if the failure occurs + * when fetching the interrupt redirection bitmap in the real-mode + * tss, this doesn't happen. So we do it ourselves. 
+ */ +static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info) +{ + vmx->rmode.irq.pending = 0; + if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip) + return; + kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip); + if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) { + *idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK; + *idt_vectoring_info |= INTR_TYPE_EXT_INTR; + return; + } + *idt_vectoring_info = + VECTORING_INFO_VALID_MASK + | INTR_TYPE_EXT_INTR + | vmx->rmode.irq.vector; +} + static void __vmx_complete_interrupts(struct vcpu_vmx *vmx, u32 idt_vectoring_info, int instr_len_field, @@ -3926,29 +3948,6 @@ static void vmx_cancel_injection(struct kvm_vcpu *vcpu) vmcs_write32(VM_ENTRY_INTR_INFO_FIELD, 0); } -/* - * Failure to inject an interrupt should give us the information - * in IDT_VECTORING_INFO_FIELD. However, if the failure occurs - * when fetching the interrupt redirection bitmap in the real-mode - * tss, this doesn't happen. So we do it ourselves. - */ -static void fixup_rmode_irq(struct vcpu_vmx *vmx, u32 *idt_vectoring_info) -{ - vmx->rmode.irq.pending = 0; - if (kvm_rip_read(&vmx->vcpu) + 1 != vmx->rmode.irq.rip) - return; - kvm_rip_write(&vmx->vcpu, vmx->rmode.irq.rip); - if (*idt_vectoring_info & VECTORING_INFO_VALID_MASK) { - *idt_vectoring_info &= ~VECTORING_INFO_TYPE_MASK; - *idt_vectoring_info |= INTR_TYPE_EXT_INTR; - return; - } - *idt_vectoring_info = - VECTORING_INFO_VALID_MASK - | INTR_TYPE_EXT_INTR - | vmx->rmode.irq.vector; -} - #ifdef CONFIG_X86_64 #define R "r" #define Q "q" -- 1.7.1 ^ permalink raw reply related [flat|nested] 16+ messages in thread
end of thread, other threads:[~2010-07-29 15:45 UTC | newest]

Thread overview: 16+ messages:
2010-07-27 13:19 [PATCH 0/6] Nonatomic interrupt injection Avi Kivity
2010-07-27 13:19 ` [PATCH 1/6] KVM: Check for pending events before attempting injection Avi Kivity
2010-07-28 16:21   ` Marcelo Tosatti
2010-07-28 16:31     ` Avi Kivity
2010-07-28 16:37       ` Marcelo Tosatti
2010-07-28 16:53         ` Avi Kivity
2010-07-28 17:22           ` Marcelo Tosatti
2010-07-29  8:49             ` Avi Kivity
2010-07-29 15:44               ` Marcelo Tosatti
2010-07-29  6:51   ` Gleb Natapov
2010-07-29  8:56     ` Avi Kivity
2010-07-27 13:19 ` [PATCH 2/6] KVM: VMX: Split up vmx_complete_interrupts() Avi Kivity
2010-07-27 13:19 ` [PATCH 3/6] KVM: VMX: Move real-mode interrupt injection fixup to vmx_complete_interrupts() Avi Kivity
2010-07-27 13:19 ` [PATCH 4/6] KVM: VMX: Parameterize vmx_complete_interrupts() for both exit and entry Avi Kivity
2010-07-27 13:19 ` [PATCH 5/6] KVM: Non-atomic interrupt injection Avi Kivity
2010-07-27 13:19 ` [PATCH 6/6] KVM: VMX: Move fixup_rmode_irq() to avoid forward declaration Avi Kivity