Message-ID: <5294B634.4050801@cloudius-systems.com>
Date: Tue, 26 Nov 2013 16:54:44 +0200
From: Avi Kivity
Subject: Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table
References: <52949847.6020908@redhat.com> <5294A68F.6060301@redhat.com> <5294B461.5000405@redhat.com>
In-Reply-To: <5294B461.5000405@redhat.com>
To: Paolo Bonzini
Cc: "Huangweidong (C)", KVM, Gleb Natapov, "Michael S. Tsirkin", "Zhanghaoyu (A)", Luonengjun, "qemu-devel@nongnu.org", Zanghongyong, Avi Kivity, "Jinxin (F)"

On 11/26/2013 04:46 PM, Paolo Bonzini wrote:
> On 26/11/2013 15:36, Avi Kivity wrote:
>> No, this would be exactly the same code that is running now:
>>
>>     mutex_lock(&kvm->irq_lock);
>>     old = kvm->irq_routing;
>>     kvm_irq_routing_update(kvm, new);
>>     mutex_unlock(&kvm->irq_lock);
>>
>>     synchronize_rcu();
>>     kfree(old);
>>     return 0;
>>
>> Except that the kfree would run in the call_rcu kernel thread instead
>> of the vcpu thread.  But the vcpus already see the new routing table
>> after the rcu_assign_pointer that is in kvm_irq_routing_update.
>>
>> I understood the proposal was also to eliminate the synchronize_rcu(),
>> so while new interrupts would see the new routing table, interrupts
>> already in flight could pick up the old one.
>
> Isn't that always the case with RCU?  (See my answer above: "the vcpus
> already see the new routing table after the rcu_assign_pointer that is
> in kvm_irq_routing_update").

With synchronize_rcu(), you have the additional guarantee that any
parallel accesses to the old routing table have completed.  Since we
also trigger the irq from rcu context, you know that after
synchronize_rcu() you won't get any interrupts to the old destination
(see kvm_set_irq_inatomic()).

It's another question whether the hardware provides the same guarantee.

> If you eliminate the synchronize_rcu, new interrupts would see the new
> routing table, while interrupts already in flight will get a dangling
> pointer.

Sure, if you drop the synchronize_rcu(), you have to add call_rcu().
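
To illustrate, here is a minimal sketch of that call_rcu() variant,
assuming the routing table structure gains an rcu_head field (the "rcu"
field, the callback, and the function name below are hypothetical
illustrations, not the actual KVM code):

    /* Hypothetical callback: frees the old table after a grace period,
     * running in RCU callback context rather than the vcpu thread. */
    static void free_irq_routing_rcu(struct rcu_head *head)
    {
            struct kvm_irq_routing_table *old =
                    container_of(head, struct kvm_irq_routing_table, rcu);
            kfree(old);
    }

    static int kvm_set_irq_routing_deferred(struct kvm *kvm,
                                    struct kvm_irq_routing_table *new)
    {
            struct kvm_irq_routing_table *old;

            mutex_lock(&kvm->irq_lock);
            old = kvm->irq_routing;
            kvm_irq_routing_update(kvm, new);  /* rcu_assign_pointer inside */
            mutex_unlock(&kvm->irq_lock);

            /* No synchronize_rcu(): the caller no longer blocks, and the
             * old table is freed once all readers are done with it. */
            call_rcu(&old->rcu, free_irq_routing_rcu);
            return 0;
    }

The trade-off is exactly the one discussed above: new interrupts see the
new table immediately, while interrupts already in flight may still use
the old one until the grace period ends, and nothing waits for them.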