Date: Fri, 29 Apr 2016 23:40:35 -0400
From: "Emilio G. Cota"
To: Alex Bennée
Cc: QEMU Developers, MTTCG Devel, Paolo Bonzini, Peter Crosthwaite, Richard Henderson, Sergey Fedorov
Subject: Re: [Qemu-devel] [RFC v3] translate-all: protect code_gen_buffer with RCU
Message-ID: <20160430034035.GA31609@flamenco>
In-Reply-To: <87shy8ev7c.fsf@linaro.org>
References: <20160425152528.GA16402@flamenco> <1461627983-32563-1-git-send-email-cota@braap.org> <87shy8ev7c.fsf@linaro.org>

On Tue, Apr 26, 2016 at 07:32:39 +0100, Alex Bennée wrote:
> Emilio G. Cota writes:
> > With two code_gen "halves", if two tb_flush calls are done in the same
> > RCU read critical section, we're screwed. I added a cpu_exit at the end
> > of tb_flush to try to mitigate this, but I haven't audited all the callers
> > (for instance, what does the gdbstub do?).
>
> I'm not sure we are going to get much from this approach. The tb_flush
> is a fairly rare occurrence its not like its on the critical performance
> path (although of course pathological cases are possible).

This is what I thought from the beginning, but wanted to give this
alternative a go anyway to see if it was feasible. On my end I won't do
any more work on this approach. Will go back to locks, despite Paolo's
(justified) dislike for them =)

> > If we end up having a mechanism to "stop all CPUs to do something", as
> > I think we'll end up needing for correct LL/SC emulation, we'll probably
> > be better off using that mechanism for tb_flush as well -- plus, we'll
> > avoid wasting memory.
>
> I'm fairly certain there will need to be a "stop everything" mode for
> some things - I'm less certain of the best way of doing it. Did you get
> a chance to look at my version of the async_safe_work mechanism?

Not yet, but will get to it very soon.

		Cheers,

			Emilio