From: Paolo Bonzini
Subject: Re: [PATCH v5 4/4] KVM: x86: Add support for local interrupt requests from userspace
Date: Wed, 29 Jul 2015 00:05:37 +0200
Message-ID: <55B7FCB1.1020508@redhat.com>
References: <1438039062-3168-1-git-send-email-srutherford@google.com>
 <1438039062-3168-4-git-send-email-srutherford@google.com>
 <55B7A6AE.5070001@redhat.com>
 <20150728190635.GA7460@google.com>
In-Reply-To: <20150728190635.GA7460@google.com>
To: Steve Rutherford, KVM list

On 28/07/2015 21:06, Steve Rutherford wrote:
>>> > > +		if (!kvm_run->ready_for_interrupt_injection &&
>>> > > +		    ready_for_interrupt_injection)
>>> > > +			kvm_make_request(KVM_REQ_PIC_UNMASK_EXIT, vcpu);
>>> > > +
>>> > > +		kvm_run->ready_for_interrupt_injection =
>>> > > +			ready_for_interrupt_injection;
>>> > > +	} else {
>>> > > 		kvm_run->ready_for_interrupt_injection =
>>> > > 			kvm_arch_interrupt_allowed(vcpu) &&
>>> > > 			!kvm_cpu_has_interrupt(vcpu) &&
>>> > > 			!kvm_event_needs_reinjection(vcpu);
>>> > > +	}
>>> > > }
>>> > > 
>>> > > static void update_cr8_intercept(struct kvm_vcpu *vcpu)
>> > 
>> > Why is this necessary? Could it just set
>> > kvm_run->ready_for_interrupt_injection as in the pic_in_kernel case?
> 
> The goal is to couple the interrupt ack cycle as closely as possible
> with the injection of the local interrupt (which occur more or less
> atomically on real hardware). The idea is to only ever attempt to
> inject local interrupts when the CPU/APIC is ready to immediately
> accept.
Ok, I understand it now.  However, you're still not causing an exit
when LVT0 changes, are you?  post_kvm_run_save is not run until the
next exit to userspace, which could be a long time later.

So, I think that you do not need KVM_REQ_PIC_UNMASK_EXIT.  Instead,
you can modify dm_request_for_irq_injection to handle the
split-irqchip case, like this:

	if (!vcpu->run->request_interrupt_window || pic_in_kernel(vcpu->kvm))
		return false;

	if (kvm_cpu_has_interrupt(vcpu))
		return false;

	return (irqchip_split(vcpu->kvm)
		? kvm_apic_accept_pic_intr(vcpu)
		: kvm_arch_interrupt_allowed(vcpu));

This will cause KVM_RUN to return -EINTR, which QEMU happens to handle
the same way as KVM_EXIT_IRQ_WINDOW_OPEN.  If you prefer the explicit
exit reason, this small change will provide it:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5ef2560075bf..3269169233fb 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -6720,8 +6720,8 @@ static int vcpu_run(struct kvm_vcpu *vcpu)
 			kvm_inject_pending_timer_irqs(vcpu);
 
 		if (dm_request_for_irq_injection(vcpu)) {
-			r = -EINTR;
-			vcpu->run->exit_reason = KVM_EXIT_INTR;
+			r = 0;
+			vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
 			++vcpu->stat.request_irq_exits;
 			break;
 		}

Feel free to post v6 of this patch only.  Everything else is mostly
okay; there are some leftovers here and there (lapic_in_kernel,
GET_VECTOR_FROM_USERSPACE) but I can fix that.

How is the integration with QEMU going?  With this latest iteration it
should be relatively easy.

Paolo
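
P.S. For anyone following the thread, the decision order of the
suggested dm_request_for_irq_injection can be modeled in isolation.
This is only a sketch: the struct fields below are hypothetical
stand-ins for the real kernel predicates (named in the comments), not
the actual KVM implementation.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel-side predicates. */
struct req_state {
	bool request_interrupt_window; /* vcpu->run->request_interrupt_window */
	bool pic_in_kernel;            /* pic_in_kernel(vcpu->kvm)            */
	bool cpu_has_interrupt;        /* kvm_cpu_has_interrupt(vcpu)         */
	bool irqchip_split;            /* irqchip_split(vcpu->kvm)            */
	bool apic_accept_pic_intr;     /* kvm_apic_accept_pic_intr(vcpu)      */
	bool interrupt_allowed;        /* kvm_arch_interrupt_allowed(vcpu)    */
};

/* Mirrors the decision order of the snippet above: never request an
 * exit when userspace did not ask for an interrupt window or when the
 * PIC lives in the kernel; never request one while an interrupt is
 * already pending; otherwise pick the readiness check that matches
 * the irqchip mode. */
static bool dm_request_for_irq_injection_model(const struct req_state *s)
{
	if (!s->request_interrupt_window || s->pic_in_kernel)
		return false;

	if (s->cpu_has_interrupt)
		return false;

	return s->irqchip_split ? s->apic_accept_pic_intr
				: s->interrupt_allowed;
}
```

The point of the ordering is that with a split irqchip the userspace
PIC's pending vector is only fetched when the APIC would accept it,
so the readiness test switches from kvm_arch_interrupt_allowed to
kvm_apic_accept_pic_intr.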