From: Alexander Lobakin
Subject: Re: [PATCH v3 6/7] bitops: let optimize out non-atomic bitops on compile-time constants
Date: Mon, 20 Jun 2022 15:12:23 +0200
Message-ID: <20220620131223.2627869-1-alexandr.lobakin@intel.com>
References: <20220617144031.2549432-1-alexandr.lobakin@intel.com>
 <20220617144031.2549432-7-alexandr.lobakin@intel.com>
To: Andy Shevchenko
Cc: Alexander Lobakin, Arnd Bergmann, Yury Norov, Mark Rutland,
 Matt Turner, Brian Cain, Geert Uytterhoeven, Yoshinori Sato,
 Rich Felker, "David S. Miller", Kees Cook, "Peter Zijlstra (Intel)",
 Marco Elver, Borislav Petkov, Tony Luck, Maciej Fijalkowski,
 Jesse Brandeburg, Greg Kroah-Hartman, linux-alpha@vger.kernel.org,
 linux-hexagon@vger.ker

From: Andy Shevchenko
Date: Mon, 20 Jun 2022 13:05:06 +0300

> On Fri, Jun 17, 2022 at 04:40:30PM +0200, Alexander Lobakin wrote:
> > Currently, many architecture-specific non-atomic bitop
> > implementations use inline asm or other hacks which are faster or
> > more robust when working with "real" variables (i.e. fields of
> > structures, etc.), but the compilers have no clue how to optimize
> > them out when called on compile-time constants. As a result, the
> > following code:
> >
> >	DECLARE_BITMAP(foo, BITS_PER_LONG) = { }; // -> unsigned long foo[1];
> >	unsigned long bar = BIT(BAR_BIT);
> >	unsigned long baz = 0;
> >
> >	__set_bit(FOO_BIT, foo);
> >	baz |= BIT(BAZ_BIT);
> >
> >	BUILD_BUG_ON(!__builtin_constant_p(test_bit(FOO_BIT, foo)));
> >	BUILD_BUG_ON(!__builtin_constant_p(bar & BAR_BIT));
> >	BUILD_BUG_ON(!__builtin_constant_p(baz & BAZ_BIT));
> >
> > triggers the first assertion on x86_64, which means that the
> > compiler is unable to evaluate the expression to a compile-time
> > initializer when the architecture-specific bitop is used, even if
> > it's obvious. In order to let the compiler optimize out such cases,
> > expand the bitop() macro to use the "constant" C non-atomic bitop
> > implementations when all of the arguments passed are compile-time
> > constants, which means that the result will be a compile-time
> > constant as well, so that it produces more efficient and simpler
> > code in 100% of cases, compared to the architecture-specific
> > counterparts.
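
For context, the selection mentioned above boils down to one wrapper
macro per bitop, roughly along these lines (a sketch, not a verbatim
quote of the hunk; the const_*() names simply alias the generic C
helpers and may still change):

	/*
	 * Use the compile-time-evaluable C implementation only when the
	 * bit number, the address and the pointed-to word are all
	 * compile-time constants; fall back to the (possibly asm-based)
	 * arch-specific op otherwise.
	 */
	#define bitop(op, nr, addr)					\
		((__builtin_constant_p(nr) &&				\
		  __builtin_constant_p((uintptr_t)(addr) != (uintptr_t)NULL) && \
		  (uintptr_t)(addr) != (uintptr_t)NULL &&		\
		  __builtin_constant_p(*(const unsigned long *)(addr))) ? \
		 const##op(nr, addr) : op(nr, addr))

	#define __set_bit(nr, addr)	bitop(___set_bit, nr, addr)
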
> >
> > The savings are architecture, compiler and compiler flags dependent;
> > for example, on x86_64 with -O2:
> >
> > GCC 12: add/remove: 78/29 grow/shrink: 332/525 up/down: 31325/-61560 (-30235)
> > LLVM 13: add/remove: 79/76 grow/shrink: 184/537 up/down: 55076/-141892 (-86816)
> > LLVM 14: add/remove: 10/3 grow/shrink: 93/138 up/down: 3705/-6992 (-3287)
> >
> > and ARM64 (courtesy of Mark):
> >
> > GCC 11: add/remove: 92/29 grow/shrink: 933/2766 up/down: 39340/-82580 (-43240)
> > LLVM 14: add/remove: 21/11 grow/shrink: 620/651 up/down: 12060/-15824 (-3764)

...

> > +/*
> > + * Many architecture-specific non-atomic bitops contain inline asm code and due
> > + * to that the compiler can't optimize them to compile-time expressions or
> > + * constants. In contrast, gen_*() helpers are defined in pure C and compilers

> generic_*() ?

Ah right, bah, forgot to change that in v2. Will fix in v4. Also, the
__builtin_constant_p() test from 7/7 triggered build bugs on ARC, will
look into that.

> > + * optimize them just as well.
> > + * Therefore, to make `unsigned long foo = 0; __set_bit(BAR, &foo)` effectively
> > + * equal to `unsigned long foo = BIT(BAR)`, pick the generic C alternative when
> > + * the arguments can be resolved at compile time. That expression itself is a
> > + * constant and doesn't bring any functional changes to the rest of the cases.
> > + * The casts to `uintptr_t` are needed to mitigate `-Waddress` warnings when
> > + * passing a bitmap from .bss or .data (-> `!!addr` is always true).
> > + */

> --
> With Best Regards,
> Andy Shevchenko

Thanks,
Al
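
P.S. To illustrate the `-Waddress` point for anyone reading along (an
illustrative example, not taken from the patch): given

	static DECLARE_BITMAP(map, BITS_PER_LONG);

	__set_bit(0, map);

a plain `!!addr` check inside the macro would expand to `!!map`, and
GCC warns along the lines of "the address of 'map' will always
evaluate as 'true'", since the array lives in .bss and its address can
never be NULL. Comparing `(uintptr_t)(addr) != (uintptr_t)NULL`
performs the same NULL test on integers instead of a pointer, which
keeps -Waddress quiet without affecting the compile-time evaluation.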