From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Paul E. McKenney"
Subject: Re: [PATCH v3 7/8] Move IO APIC to its own lock.
Date: Thu, 13 Aug 2009 08:15:45 -0700
Message-ID: <20090813151544.GA6744@linux.vnet.ibm.com>
References: <1250079442-5163-1-git-send-email-gleb@redhat.com> <1250079442-5163-8-git-send-email-gleb@redhat.com> <20090813091330.GB17022@amt.cnet> <20090813094810.GB23736@redhat.com> <4A83E1B9.3040508@redhat.com> <20090813100928.GC23736@redhat.com> <4A83EE76.9020001@redhat.com>
Reply-To: paulmck@linux.vnet.ibm.com
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Gleb Natapov, Marcelo Tosatti, kvm@vger.kernel.org
To: Avi Kivity
Content-Disposition: inline
In-Reply-To: <4A83EE76.9020001@redhat.com>

On Thu, Aug 13, 2009 at 01:44:06PM +0300, Avi Kivity wrote:
> On 08/13/2009 01:09 PM, Gleb Natapov wrote:
>>> There's also srcu.
>>>
>> What are the disadvantages? There should be some, otherwise why not use
>> it all the time.
>
> I think it incurs an atomic op in the read path, but not much overhead
> otherwise.  Paul?
There are no atomic operations in srcu_read_lock():

	int srcu_read_lock(struct srcu_struct *sp)
	{
		int idx;

		preempt_disable();
		idx = sp->completed & 0x1;
		barrier();  /* ensure compiler looks -once- at sp->completed. */
		per_cpu_ptr(sp->per_cpu_ref, smp_processor_id())->c[idx]++;
		srcu_barrier();  /* ensure compiler won't misorder critical section. */
		preempt_enable();
		return idx;
	}

There is a preempt_disable() and a preempt_enable(), which non-atomically
manipulate a field in the thread_info structure.  There is a barrier()
and an srcu_barrier(), which are just compiler directives (no code
generated).  Other than that, simple arithmetic and array accesses.
There shouldn't even be any cache misses in the common case (the uncommon
case being where synchronize_srcu() is executing on some other CPU).

There is even less in srcu_read_unlock():

	void srcu_read_unlock(struct srcu_struct *sp, int idx)
	{
		preempt_disable();
		srcu_barrier();  /* ensure compiler won't misorder critical section. */
		per_cpu_ptr(sp->per_cpu_ref, smp_processor_id())->c[idx]--;
		preempt_enable();
	}

So SRCU should have pretty low overhead.  And, as with other forms of
RCU, legal use of the read-side primitives cannot possibly participate
in deadlocks.

So, to answer the question above, what are the disadvantages?

o	On the update side, synchronize_srcu() does take some time,
	mostly blocking in synchronize_sched().  So, like other forms
	of RCU, you would use SRCU in read-mostly situations.

o	Just as with RCU, reads and updates run concurrently, with all
	the good and bad that this implies.  For an example of the good,
	srcu_read_lock() executes deterministically, with no blocking or
	spinning.  For an example of the bad, there is no way to shut
	down SRCU readers.  These are opposite sides of the same coin. ;-)

o	Although srcu_read_lock() and srcu_read_unlock() are lightweight,
	they are expensive compared to other forms of RCU.
o	In contrast to other forms of RCU, SRCU requires that the return
	value from srcu_read_lock() be passed into srcu_read_unlock().
	Usually not a problem, but it does place another constraint on
	the code.

Please keep in mind that I have no idea what you are thinking of using
SRCU for, so the above advice is necessarily quite generic.  ;-)

							Thanx, Paul