From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 26 Nov 2013 17:03:58 +0200
From: Gleb Natapov
Message-ID: <20131126150357.GA20352@redhat.com>
References: <52949847.6020908@redhat.com> <5294A68F.6060301@redhat.com>
 <5294B461.5000405@redhat.com> <5294B634.4050801@cloudius-systems.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5294B634.4050801@cloudius-systems.com>
Subject: Re: [Qemu-devel] [RFC] create a single workqueue for each vm to
 update vm irq routing table
To: Avi Kivity
Cc: "Huangweidong (C)", KVM, "Michael S. Tsirkin", "Zhanghaoyu (A)",
 Luonengjun, "qemu-devel@nongnu.org", Zanghongyong, Avi Kivity,
 Paolo Bonzini, "Jinxin (F)"

On Tue, Nov 26, 2013 at 04:54:44PM +0200, Avi Kivity wrote:
> On 11/26/2013 04:46 PM, Paolo Bonzini wrote:
> > On 26/11/2013 15:36, Avi Kivity wrote:
> >> No, this would be exactly the same code that is running now:
> >>
> >>     mutex_lock(&kvm->irq_lock);
> >>     old = kvm->irq_routing;
> >>     kvm_irq_routing_update(kvm, new);
> >>     mutex_unlock(&kvm->irq_lock);
> >>
> >>     synchronize_rcu();
> >>     kfree(old);
> >>     return 0;
> >>
> >> Except that the kfree would run in the call_rcu kernel thread instead
> >> of the vcpu thread. But the vcpus already see the new routing table
> >> after the rcu_assign_pointer that is in kvm_irq_routing_update.
> >>
> >> I understood the proposal was also to eliminate the synchronize_rcu(),
> >> so while new interrupts would see the new routing table, interrupts
> >> already in flight could pick up the old one.
> >
> > Isn't that always the case with RCU? (See my answer above: "the vcpus
> > already see the new routing table after the rcu_assign_pointer that is
> > in kvm_irq_routing_update").
>
> With synchronize_rcu(), you have the additional guarantee that any
> parallel accesses to the old routing table have completed. Since we
> also trigger the irq from rcu context, you know that after
> synchronize_rcu() you won't get any interrupts to the old
> destination (see kvm_set_irq_inatomic()).

We do not have this guarantee for other vcpus that do not call
synchronize_rcu(). They may still use the outdated routing table while the
vcpu or iothread that performed the table update sits in synchronize_rcu().

--
	Gleb.
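
[Editor's note: the call_rcu() variant the thread is debating can be sketched as below. This is kernel-style illustration only, not a tested patch; in particular, the `rcu` head embedded in struct kvm_irq_routing_table, and the callback name, are assumptions made for the sketch.]

    /* Sketch: free the old routing table from RCU callback context
     * instead of blocking the updater in synchronize_rcu().  Assumes
     * struct kvm_irq_routing_table gains a struct rcu_head member
     * named "rcu" (hypothetical). */
    static void free_irq_routing_table_rcu(struct rcu_head *head)
    {
            struct kvm_irq_routing_table *old =
                    container_of(head, struct kvm_irq_routing_table, rcu);
            kfree(old);
    }

            mutex_lock(&kvm->irq_lock);
            old = kvm->irq_routing;
            kvm_irq_routing_update(kvm, new); /* rcu_assign_pointer() inside */
            mutex_unlock(&kvm->irq_lock);

            /* No synchronize_rcu(): the updater returns immediately.
             * Readers still inside an RCU read-side critical section may
             * keep using "old" until the grace period ends, at which
             * point the callback frees it -- which is exactly Gleb's
             * point: other vcpus can see the stale table either way. */
            call_rcu(&old->rcu, free_irq_routing_table_rcu);
            return 0;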