Date: Mon, 6 Jun 2022 13:48:50 -0700
From: Yury Norov
To: Alexander Lobakin
Cc: Arnd Bergmann, Andy Shevchenko, Richard Henderson, Matt Turner,
	Brian Cain, Geert Uytterhoeven, Yoshinori Sato, Rich Felker,
	"David S. Miller", Kees Cook, "Peter Zijlstra (Intel)", Marco Elver,
	Borislav Petkov, Tony Luck, Greg Kroah-Hartman,
	linux-alpha@vger.kernel.org, linux-hexagon@vger.kernel.org,
	linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 4/6] bitops: unify non-atomic bitops prototypes across architectures
References: <20220606114908.962562-1-alexandr.lobakin@intel.com>
	<20220606114908.962562-5-alexandr.lobakin@intel.com>
In-Reply-To: <20220606114908.962562-5-alexandr.lobakin@intel.com>

On Mon, Jun 06, 2022 at 01:49:05PM +0200, Alexander Lobakin wrote:
> Currently, there is a mess with the prototypes of the non-atomic
> bitops across the different architectures:
>
>   ret   bool, int, unsigned long
>   nr    int, long, unsigned int, unsigned long
>   addr  volatile unsigned long *, volatile void *
>
> Thankfully, it doesn't provoke any bugs, but it can sometimes make
> the compiler angry when it's not handy at all.
> Adjust all the prototypes to the following standard:
>
>   ret   bool                      retval can be only 0 or 1
>   nr    unsigned long             native; signed makes no sense
>   addr  volatile unsigned long *  bitmaps are arrays of ulongs
>
> Finally, add some static assertions in order to prevent people from
> making a mess in this room again.
> I also used the %__always_inline attribute consistently, so that they
> always get resolved to the actual operations.
>
> Suggested-by: Andy Shevchenko
> Signed-off-by: Alexander Lobakin
> ---

Reviewed-by: Yury Norov

[...]
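The standardized signature described above can be sketched outside the kernel as follows. This is a minimal illustration, not the kernel's implementation: `generic___test_and_set_bit` and the `BIT_MASK`/`BIT_WORD` helpers are modeled on the asm-generic versions, with `bool` return, `unsigned long nr`, and `volatile unsigned long *addr` as the commit message specifies.

```c
#include <assert.h>
#include <stdbool.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))
#define BIT_MASK(nr)  (1UL << ((nr) % BITS_PER_LONG))
#define BIT_WORD(nr)  ((nr) / BITS_PER_LONG)

/* Non-atomic test-and-set with the unified prototype:
 * returns whether the bit was already set, then sets it. */
static inline bool
generic___test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
{
	unsigned long mask = BIT_MASK(nr);
	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
	unsigned long old = *p;

	*p = old | mask;
	return (old & mask) != 0;
}
```

With every architecture using this exact shape, callers no longer see `int` vs. `long` vs. `bool` mismatches in the return or index types.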
> diff --git a/include/linux/bitops.h b/include/linux/bitops.h
> index 7aaed501f768..5520ac9b1c24 100644
> --- a/include/linux/bitops.h
> +++ b/include/linux/bitops.h
> @@ -26,12 +26,25 @@ extern unsigned int __sw_hweight16(unsigned int w);
>  extern unsigned int __sw_hweight32(unsigned int w);
>  extern unsigned long __sw_hweight64(__u64 w);
>
> +#include
> +
>  /*
>   * Include this here because some architectures need generic_ffs/fls in
>   * scope
>   */
>  #include
>
> +/* Check that the bitops prototypes are sane */
> +#define __check_bitop_pr(name) static_assert(__same_type(name, gen_##name))
> +__check_bitop_pr(__set_bit);
> +__check_bitop_pr(__clear_bit);
> +__check_bitop_pr(__change_bit);
> +__check_bitop_pr(__test_and_set_bit);
> +__check_bitop_pr(__test_and_clear_bit);
> +__check_bitop_pr(__test_and_change_bit);
> +__check_bitop_pr(test_bit);
> +#undef __check_bitop_pr

This is an amazing trick! And the series is good overall.

Do you want me to take it into the bitmap tree when it's ready, or will
you route it some other way?

Thanks,
Yury