Date: Tue, 26 Nov 2013 14:56:10 +0200
From: Gleb Natapov
Message-ID: <20131126125610.GM959@redhat.com>
References: <52949847.6020908@redhat.com>
In-Reply-To: <52949847.6020908@redhat.com>
Subject: Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table
To: Paolo Bonzini
Cc: "Huangweidong (C)", KVM, "Michael S. Tsirkin", "Jinxin (F)",
	"Zhanghaoyu (A)", Luonengjun, "qemu-devel@nongnu.org", Zanghongyong

On Tue, Nov 26, 2013 at 01:47:03PM +0100, Paolo Bonzini wrote:
> On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
> > When the guest sets irq smp_affinity, a VMEXIT occurs and the vcpu
> > thread returns from the hypervisor to QEMU via ioctl; the vcpu thread
> > then asks the hypervisor to update the irq routing table. In
> > kvm_set_irq_routing, synchronize_rcu is called, so the current vcpu
> > thread blocks for a long time waiting for the RCU grace period, and
> > during this period the vcpu cannot provide service to the VM. Thus
> > interrupts delivered to this vcpu cannot be handled in time, and the
> > apps running on this vcpu cannot be serviced either. That is
> > unacceptable in some real-time scenarios, e.g. telecom.
> >
> > So I want to create a single workqueue for each VM to perform the RCU
> > synchronization for the irq routing table asynchronously, and let the
> > vcpu thread return and VMENTRY to service the VM immediately, with no
> > need to block waiting for the RCU grace period. I have implemented a
> > rough patch and tested it in our telecom environment; the problem
> > described above disappeared.
>
> I don't think a workqueue is even needed. You just need to use call_rcu
> to free "old" after releasing kvm->irq_lock.
>
> What do you think?
>
It should be rate-limited somehow. Since it is guest-triggerable, a
guest may cause the host to allocate a lot of memory this way.

Is this about MSI interrupt affinity? IIRC, changing INT interrupt
affinity should not trigger a kvm_set_irq_routing update. If this is
about MSI only, then what about changing userspace to use KVM_SIGNAL_MSI
for MSI injection?

--
			Gleb.
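
A minimal sketch of the call_rcu approach suggested above, against the
kvm_set_irq_routing flow of that era; the rcu_head member and the callback
name are assumptions for illustration, not code from an actual patch:

static void free_irq_routing_table(struct rcu_head *head)
{
	/* Recover the old table from its embedded rcu_head and free it.
	 * Assumes struct kvm_irq_routing_table gained a "struct rcu_head
	 * rcu" member for this purpose. */
	struct kvm_irq_routing_table *rt =
		container_of(head, struct kvm_irq_routing_table, rcu);

	kfree(rt);
}

	/* In kvm_set_irq_routing(), after publishing the new table: */
	mutex_lock(&kvm->irq_lock);
	old = kvm->irq_routing;
	rcu_assign_pointer(kvm->irq_routing, new);
	mutex_unlock(&kvm->irq_lock);

	/* Instead of synchronize_rcu() followed by kfree(old): defer the
	 * free so the vcpu thread re-enters the guest without waiting for
	 * a grace period. */
	call_rcu(&old->rcu, free_irq_routing_table);

This is also where the rate-limiting concern bites: each pending call_rcu
callback pins one stale table until a grace period elapses, so a guest
that rewrites its routing table in a tight loop can keep an unbounded
number of old tables queued on the host.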
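
And a sketch of MSI injection via KVM_SIGNAL_MSI, which delivers an MSI
directly from its address/data pair and so needs no routing-table entry,
hence no kvm_set_irq_routing call at all; vmfd, addr, and data are
placeholders supplied by the caller:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Requires KVM_CAP_SIGNAL_MSI (available since Linux 3.5). */
static int inject_msi(int vmfd, uint64_t addr, uint32_t data)
{
	struct kvm_msi msi = {
		.address_lo = (uint32_t)addr,
		.address_hi = (uint32_t)(addr >> 32),
		.data = data,
	};

	/* > 0: delivered; 0: blocked by guest state; < 0: error. */
	return ioctl(vmfd, KVM_SIGNAL_MSI, &msi);
}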