Subject: Re: [PATCH v3 4/9] cpumask: Introduce for_each_cpu_andnot()
From: Yury Norov
Date: Mon, 5 Sep 2022 11:33:24 -0700
To: Valentin Schneider
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Saeed Mahameed, Leon Romanovsky, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Andy Shevchenko, Rasmus Villemoes, Ingo Molnar, Peter Zijlstra, Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Mel Gorman, Greg Kroah-Hartman, Heiko Carstens, Tony Luck, Jonathan Cameron, Gal Pressman, Tariq Toukan, Jesse Brandeburg
References: <20220825181210.284283-1-vschneid@redhat.com> <20220825181210.284283-5-vschneid@redhat.com>

On Mon, Sep 5, 2022 at 9:44 AM Valentin Schneider wrote:
>
> On 25/08/22 14:14, Yury Norov wrote:
> > On Thu, Aug 25, 2022 at 07:12:05PM +0100, Valentin Schneider wrote:
> >> +#define for_each_cpu_andnot(cpu, mask1, mask2)				\
> >> +	for ((cpu) = -1;						\
> >> +	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)),	\
> >> +	     (cpu) < nr_cpu_ids;)
> >
> > The standard doesn't guarantee the order of execution of the last 2 lines,
> > so you might end up with unreliable code. Can you do it in a more
> > conventional style:
> >
> > #define for_each_cpu_andnot(cpu, mask1, mask2)				\
> >	for ((cpu) = cpumask_next_andnot(-1, (mask1), (mask2));		\
> >	     (cpu) < nr_cpu_ids;					\
> >	     (cpu) = cpumask_next_andnot((cpu), (mask1), (mask2)))
> >
>
> IIUC the order of execution *is* guaranteed as this is a comma operator,
> not argument passing:
>
>   6.5.17 Comma operator
>
>   The left operand of a comma operator is evaluated as a void expression;
>   there is a sequence point after its evaluation. Then the right operand is
>   evaluated; the result has its type and value.
>
> for_each_cpu{_and}() uses the same pattern (which I simply copied here).
>
> Still, I'd be up for making this a bit more readable. I did a bit of
> digging to figure out how we ended up with that pattern, and found
>
>   7baac8b91f98 ("cpumask: make for_each_cpu_mask a bit smaller")
>
> so this appears to have been done to save up on generated instructions.
> *if* it is actually OK standard-wise, I'd vote to leave it as-is.

Indeed. I probably messed with ANSI C. Sorry for the noise.