Subject: Re: [RFC v2 5/6] eal: add atomic bit operations
From: Mattias Rönnblom
To: Morten Brørup, Mattias Rönnblom, dev@dpdk.org
Cc: Heng Wang, Stephen Hemminger, Tyler Retzlaff
Date: Thu, 25 Apr 2024 16:36:26 +0200

On 2024-04-25 12:25, Morten Brørup wrote:
>> +#define rte_bit_atomic_test(addr, nr, memory_order)		\
>> +	_Generic((addr),					\
>> +		uint32_t *: __rte_bit_atomic_test32,		\
>> +		uint64_t *: __rte_bit_atomic_test64)(addr, nr, memory_order)
>
> I wonder if these should have RTE_ATOMIC qualifier:
>
> +		RTE_ATOMIC(uint32_t) *: __rte_bit_atomic_test32,	\
> +		RTE_ATOMIC(uint64_t) *: __rte_bit_atomic_test64)(addr, nr, memory_order)
>
>
>> +#define __RTE_GEN_BIT_ATOMIC_TEST(size)			\
>> +	static inline bool					\
>> +	__rte_bit_atomic_test ## size(const uint ## size ## _t *addr, \
>
> I wonder if the "addr" parameter should have RTE_ATOMIC qualifier:
>
> +	__rte_bit_atomic_test ## size(const RTE_ATOMIC(uint ## size ## _t) *addr, \
>
> instead of casting into a_addr.
>

Check the cover letter for the rationale for the cast.

Where I'm at now is that I think C11 _Atomic is rather poor design. Its underlying assumption, that an object which allows for atomic access must have every operation upon it be atomic, regardless of where the object is in its lifetime and which thread is accessing it, does not hold in the general case.
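Consider the common pattern of an object being initialized by one thread before being shared with others. A minimal sketch of the issue, in plain C rather than DPDK code (flags, init_flags and test_flag are made-up names for illustration):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

uint64_t flags; /* a plain, non-_Atomic object */

/* Setup phase: no other thread has a reference to the object yet, so
 * plain, non-atomic stores are all that is needed. */
void
init_flags(void)
{
	flags = UINT64_C(1) << 17;
}

/* Shared phase: atomicity is now actually required, so an atomic view
 * of the object is taken only here. The cast is not sanctioned by the
 * C standard, but works where _Atomic uint64_t is lock-free and has
 * the same size and alignment as uint64_t. */
bool
test_flag(unsigned int nr)
{
	const _Atomic uint64_t *a_flags = (const _Atomic uint64_t *)&flags;

	return (atomic_load_explicit(a_flags, memory_order_relaxed) >> nr) & 1;
}

Had flags been declared _Atomic uint64_t, the setup-phase store would also have been compiled as an atomic (and, by default, seq_cst) operation, for no benefit.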
The only reason for _Atomic being the way it is, as far as I can see, is to accommodate ISAs that do not have the appropriate atomic machine instructions, and thus require a lock or some other data to be associated with the actual user-data-carrying bits. Neither GCC nor DPDK supports any such ISA, to my knowledge, and I suspect neither ever will. So the cast will continue to work.

>> +	unsigned int nr, int memory_order)			\
>> +	{							\
>> +		RTE_ASSERT(nr < size);				\
>> +								\
>> +		const RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
>> +			(const RTE_ATOMIC(uint ## size ## _t) *)addr; \
>> +		uint ## size ## _t mask = (uint ## size ## _t)1 << nr; \
>> +		return rte_atomic_load_explicit(a_addr, memory_order) & mask; \
>> +	}
>
>
> Similar considerations regarding volatile qualifier for the "once" operations.
>
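To make the dispatch-and-cast structure concrete, here it is with the RTE_ machinery and token pasting peeled away. Just a sketch, with the RTE_ASSERT omitted and bit_atomic_test32/64 as stand-in names, not the RFC's actual identifiers:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Roughly what __RTE_GEN_BIT_ATOMIC_TEST(64) generates: */
static inline bool
bit_atomic_test64(const uint64_t *addr, unsigned int nr, int memory_order)
{
	const _Atomic uint64_t *a_addr = (const _Atomic uint64_t *)addr;
	uint64_t mask = (uint64_t)1 << nr;

	return atomic_load_explicit(a_addr, memory_order) & mask;
}

static inline bool
bit_atomic_test32(const uint32_t *addr, unsigned int nr, int memory_order)
{
	const _Atomic uint32_t *a_addr = (const _Atomic uint32_t *)addr;
	uint32_t mask = (uint32_t)1 << nr;

	return atomic_load_explicit(a_addr, memory_order) & mask;
}

/* The _Generic associations use plain (non-_Atomic) pointer types,
 * which is what lets callers pass pointers to ordinary words: */
#define bit_atomic_test(addr, nr, memory_order)			\
	_Generic((addr),					\
		uint32_t *: bit_atomic_test32,			\
		uint64_t *: bit_atomic_test64)(addr, nr, memory_order)

A caller then operates directly on an ordinary word:

	uint64_t word = 4;
	bool bit2_set = bit_atomic_test(&word, 2, memory_order_acquire);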