Date: Thu, 29 Jan 2026 10:03:32 +0000
From: David Laight
To: Marco Elver
Cc: Peter Zijlstra, Will Deacon, Ingo Molnar, Thomas Gleixner,
 Boqun Feng, Waiman Long, Bart Van Assche, llvm@lists.linux.dev,
 Catalin Marinas, Arnd Bergmann, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/3] arm64: Optimize __READ_ONCE() with CONFIG_LTO=y
Message-ID: <20260129100332.500248d3@pumpkin>
In-Reply-To: <20260129005645.747680-3-elver@google.com>
References: <20260129005645.747680-1-elver@google.com>
 <20260129005645.747680-3-elver@google.com>

On Thu, 29 Jan 2026 01:52:33 +0100
Marco Elver wrote:

> Rework arm64 LTO __READ_ONCE() to improve code generation as follows:
>
> 1. Replace _Generic-based __unqual_scalar_typeof() with more complete
>    __rwonce_typeof_unqual(). This strips qualifiers from all types, not
>    just integer types, which is required to be able to assign (must be
>    non-const) to __u.__val in the non-atomic case (required for #2).
>
>    Once our minimum compiler versions are bumped, this just becomes
>    TYPEOF_UNQUAL() (or typeof_unqual() should we decide to adopt C23
>    naming).
>    Sadly the fallback version of __rwonce_typeof_unqual() cannot
>    be used as a general TYPEOF_UNQUAL() fallback (see code comments).
>
> One subtle point here is that non-integer types of __val could be const
> or volatile within the union with the old __unqual_scalar_typeof(), if
> the passed variable is const or volatile. This would then result in a
> forced load from the stack if __u.__val is volatile; in the case of
> const, it does look odd if the underlying storage changes, but the
> compiler is told said member is "const" -- it smells like UB.
>
> 2. Eliminate the atomic flag and ternary conditional expression. Move
>    the fallback volatile load into the default case of the switch,
>    ensuring __u is unconditionally initialized across all paths.
>    The statement expression now unconditionally returns __u.__val.
> ...
> Signed-off-by: Marco Elver
> ---
> v2:
>  * Add __rwonce_typeof_unqual() as fallback for old compilers.
> ---
>  arch/arm64/include/asm/rwonce.h | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
> index fc0fb42b0b64..712de3238f9a 100644
> --- a/arch/arm64/include/asm/rwonce.h
> +++ b/arch/arm64/include/asm/rwonce.h
> @@ -19,6 +19,23 @@
>  		"ldapr" #sfx "\t" #regs,				\
>  		ARM64_HAS_LDAPR)
>
> +#ifdef USE_TYPEOF_UNQUAL
> +#define __rwonce_typeof_unqual(x) TYPEOF_UNQUAL(x)
> +#else
> +/*
> + * Fallback for older compilers to infer an unqualified type.
> + *
> + * Uses the fact that auto is supposed to drop qualifiers. Unlike

Maybe:
	In all versions of clang 'auto' correctly drops qualifiers.

A reminder in here that this is clang only might also clarify things.

> + * typeof_unqual(), the type must be complete (defines an unevaluated local
> + * variable); this must trivially hold because __READ_ONCE() returns a value.

Not sure that is needed.
> + *
> + * Another caveat is that because of array-to-pointer decay, an array is
> + * inferred as a pointer type; this is fine for __READ_ONCE usage, but is
> + * unsuitable as a general fallback implementation for TYPEOF_UNQUAL.

gcc < 11.0 stops it being used elsewhere.
Something shorter? The array-to-pointer decay doesn't matter here.

	David

> + */
> +#define __rwonce_typeof_unqual(x) typeof(({ auto ____t = (x); ____t; }))
> +#endif
> +
>  /*
>   * When building with LTO, there is an increased risk of the compiler
>   * converting an address dependency headed by a READ_ONCE() invocation
> @@ -32,8 +49,7 @@
>  #define __READ_ONCE(x)							\
>  ({									\
>  	typeof(&(x)) __x = &(x);					\
> -	int atomic = 1;							\
> -	union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u;	\
> +	union { __rwonce_typeof_unqual(*__x) __val; char __c[1]; } __u;	\
>  	switch (sizeof(x)) {						\
>  	case 1:								\
>  		asm volatile(__LOAD_RCPC(b, %w0, %1)			\
> @@ -56,9 +72,9 @@
>  			: "Q" (*__x) : "memory");			\
>  		break;							\
>  	default:							\
> -		atomic = 0;						\
> +		__u.__val = *(volatile typeof(*__x) *)__x;		\
>  	}								\
> -	atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\
> +	__u.__val;							\
>  })
>
>  #endif /* !BUILD_VDSO */