Date: Tue, 26 Nov 2013 13:47:03 +0100
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table
To: "Zhanghaoyu (A)"
Cc: "Huangweidong (C)", KVM, Gleb Natapov, "Michael S. Tsirkin", "Jinxin (F)", Luonengjun, "qemu-devel@nongnu.org", Zanghongyong

On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
> When the guest sets an irq smp_affinity, a VMEXIT occurs and the vcpu thread returns from the hypervisor to QEMU via ioctl; the vcpu thread then asks the hypervisor to update the irq routing table.
> In kvm_set_irq_routing, synchronize_rcu is called, so the current vcpu thread blocks for a long time waiting for the RCU grace period. During this period the vcpu cannot provide service to the VM,
> so interrupts delivered to this vcpu cannot be handled in time, and the applications running on this vcpu cannot be serviced either.
> This is unacceptable in some real-time scenarios, e.g. telecom.
>
> So, I want to create a single workqueue for each VM to perform the RCU synchronization for the irq routing table asynchronously,
> letting the vcpu thread return and VMENTRY to service the VM immediately, with no need to block waiting for the RCU grace period.
> I have implemented a rough patch and tested it in our telecom environment; the problem above disappeared.

I don't think a workqueue is even needed. You just need to use call_rcu to free "old" after releasing kvm->irq_lock.

What do you think?

Paolo
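[Editor's note: the call_rcu suggestion above could be sketched roughly as follows. This is a minimal illustration, not the actual patch; it assumes an rcu_head field is added to struct kvm_irq_routing_table, and the callback name is made up for the example.]

```c
/* Assumed addition: embed an rcu_head in the routing table so it can
 * be passed to call_rcu().
 */
struct kvm_irq_routing_table {
	/* ... existing fields ... */
	struct rcu_head rcu;	/* hypothetical new field */
};

/* Deferred-free callback, invoked after a grace period elapses. */
static void free_irq_routing_table_rcu(struct rcu_head *head)
{
	struct kvm_irq_routing_table *rt =
		container_of(head, struct kvm_irq_routing_table, rcu);
	kfree(rt);
}

/*
 * In kvm_set_irq_routing(), instead of blocking the vcpu thread:
 *
 *	mutex_unlock(&kvm->irq_lock);
 *	synchronize_rcu();	<- blocks for a full grace period
 *	kfree(old);
 *
 * the old table would be handed to call_rcu(), which returns
 * immediately and frees "old" asynchronously once all pre-existing
 * RCU readers have finished:
 */
	mutex_unlock(&kvm->irq_lock);
	call_rcu(&old->rcu, free_irq_routing_table_rcu);
```

The trade-off is the usual synchronize_rcu vs. call_rcu one: the vcpu thread no longer waits for readers, but the old table's memory lingers until the grace period ends, and the callback must not assume it runs in any particular context.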