Subject: Re: [Qemu-devel] [RFC v2 PATCH 01/13] Introduce TCGOpcode for memory barrier
From: Sergey Fedorov
Date: Mon, 6 Jun 2016 19:14:28 +0300
Message-ID: <5755A164.9040709@gmail.com>
To: Pranith Kumar
Cc: Richard Henderson, "open list:All patches CC here", Alex Bennée

On 06/06/16 18:58, Pranith Kumar wrote:
> On Mon, Jun 6, 2016 at 11:49 AM, Sergey Fedorov wrote:
>> On 06/06/16 18:47, Pranith Kumar wrote:
>>> On Mon, Jun 6, 2016 at 11:44 AM, Sergey Fedorov wrote:
>>>> On 03/06/16 21:27, Pranith Kumar wrote:
>>>>> On Thu, Jun 2, 2016 at 5:18 PM, Richard Henderson wrote:
>>>>>> What if we have tcg_canonicalize_memop (or some such) split off the
>>>>>> barriers into separate opcodes. E.g.
>>>>>>
>>>>>> MO_BAR_LD_B = 32    // prevent earlier loads from crossing current op
>>>>>> MO_BAR_ST_B = 64    // prevent earlier stores from crossing current op
>>>>>> MO_BAR_LD_A = 128   // prevent later loads from crossing current op
>>>>>> MO_BAR_ST_A = 256   // prevent later stores from crossing current op
>>>>>> MO_BAR_LDST_B = MO_BAR_LD_B | MO_BAR_ST_B
>>>>>> MO_BAR_LDST_A = MO_BAR_LD_A | MO_BAR_ST_A
>>>>>> MO_BAR_MASK = MO_BAR_LDST_B | MO_BAR_LDST_A
>>>>>>
>>>>>> // Match Sparc MEMBAR as the most flexible host.
>>>>>> TCG_BAR_LD_LD = 1   // #LoadLoad barrier
>>>>>> TCG_BAR_ST_LD = 2   // #StoreLoad barrier
>>>>>> TCG_BAR_LD_ST = 4   // #LoadStore barrier
>>>>>> TCG_BAR_ST_ST = 8   // #StoreStore barrier
>>>>>> TCG_BAR_SYNC  = 64  // SEQ_CST barrier
>>>>> I really like this format. I would also like to add to the frontend:
>>>>>
>>>> Actually, the acquire barrier is a combined load-load + load-store
>>>> barrier, and the release barrier is a combination of load-store +
>>>> store-store barriers.
>>>>
>>> All of the above are two-way barriers, whereas acquire/release are
>>> one-way barriers. So we cannot combine the above to get acquire/release
>>> semantics without being conservative.
>> Do you mean *barriers* or *memory access* operations implying memory
>> ordering?
> I meant the latter. I know of no arch which has acquire/release barriers.
> Sorry for the confusion.

So did I. By the way, what's the difference between a sequentially
consistent *barrier* and a combination of all the other barriers?

Kind regards,
Sergey