From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 26 Nov 2025 09:36:32 +0000
From: Mark Rutland
To: Seongsu Park
Cc: will@kernel.org, peterz@infradead.org, boqun.feng@gmail.com,
	gary@garyguo.net, catalin.marinas@arm.com,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH] arm64: atomics: lse: Remove unused parameters from
 ATOMIC_FETCH_OP_AND macros
References: <20251126021025.3239562-1-sgsu.park@samsung.com>
In-Reply-To: <20251126021025.3239562-1-sgsu.park@samsung.com>

On Wed, Nov 26, 2025 at 11:10:25AM +0900, Seongsu Park wrote:
> The ATOMIC_FETCH_OP_AND and ATOMIC64_FETCH_OP_AND macros accept 'mb' and
> 'cl' parameters but never use them in their implementation. These macros
> simply delegate to the corresponding andnot functions, which handle the
> actual atomic operations and memory barriers.
> 
> Signed-off-by: Seongsu Park

FWIW, this was a leftover from commit:

  5e9e43c987b2 ("arm64: atomics: lse: define ANDs in terms of ANDNOTs")

... where I missed the leftover macro arguments.

AFAICT there aren't any other leftover macro arguments from that round of
asm improvements (or otherwise), so:

Acked-by: Mark Rutland

Mark.

> ---
>  arch/arm64/include/asm/atomic_lse.h | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
> index 87f568a94e55..afad1849c4cf 100644
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -103,17 +103,17 @@ static __always_inline void __lse_atomic_and(int i, atomic_t *v)
>  	return __lse_atomic_andnot(~i, v);
>  }
>  
> -#define ATOMIC_FETCH_OP_AND(name, mb, cl...)				\
> +#define ATOMIC_FETCH_OP_AND(name)					\
>  static __always_inline int						\
>  __lse_atomic_fetch_and##name(int i, atomic_t *v)			\
>  {									\
>  	return __lse_atomic_fetch_andnot##name(~i, v);			\
>  }
>  
> -ATOMIC_FETCH_OP_AND(_relaxed,   )
> -ATOMIC_FETCH_OP_AND(_acquire,  a, "memory")
> -ATOMIC_FETCH_OP_AND(_release,  l, "memory")
> -ATOMIC_FETCH_OP_AND(        , al, "memory")
> +ATOMIC_FETCH_OP_AND(_relaxed)
> +ATOMIC_FETCH_OP_AND(_acquire)
> +ATOMIC_FETCH_OP_AND(_release)
> +ATOMIC_FETCH_OP_AND(        )
>  
>  #undef ATOMIC_FETCH_OP_AND
>  
> @@ -210,17 +210,17 @@ static __always_inline void __lse_atomic64_and(s64 i, atomic64_t *v)
>  	return __lse_atomic64_andnot(~i, v);
>  }
>  
> -#define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
> +#define ATOMIC64_FETCH_OP_AND(name)					\
>  static __always_inline long						\
>  __lse_atomic64_fetch_and##name(s64 i, atomic64_t *v)			\
>  {									\
>  	return __lse_atomic64_fetch_andnot##name(~i, v);		\
>  }
>  
> -ATOMIC64_FETCH_OP_AND(_relaxed,   )
> -ATOMIC64_FETCH_OP_AND(_acquire,  a, "memory")
> -ATOMIC64_FETCH_OP_AND(_release,  l, "memory")
> -ATOMIC64_FETCH_OP_AND(        , al, "memory")
> +ATOMIC64_FETCH_OP_AND(_relaxed)
> +ATOMIC64_FETCH_OP_AND(_acquire)
> +ATOMIC64_FETCH_OP_AND(_release)
> +ATOMIC64_FETCH_OP_AND(        )
> 
>  #undef ATOMIC64_FETCH_OP_AND
> 
> -- 
> 2.34.1
> 