Date: Fri, 21 Oct 2022 16:16:17 +0300
From: Andy Shevchenko
To: Valentin Schneider
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-kernel@vger.kernel.org, Saeed Mahameed, Leon Romanovsky,
	"David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Yury Norov, Rasmus Villemoes, Ingo Molnar, Peter Zijlstra,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Mel Gorman,
	Greg Kroah-Hartman, Heiko Carstens, Tony Luck, Jonathan Cameron,
	Gal Pressman, Tariq Toukan, Jesse Brandeburg
Subject: Re: [PATCH v5 2/3] sched/topology: Introduce for_each_numa_hop_mask()
References: <20221021121927.2893692-1-vschneid@redhat.com>
	<20221021121927.2893692-3-vschneid@redhat.com>
In-Reply-To: <20221021121927.2893692-3-vschneid@redhat.com>
Organization: Intel Finland Oy - BIC 0357606-4 - Westendinkatu 7, 02160 Espoo
X-Mailing-List: netdev@vger.kernel.org

On Fri, Oct 21, 2022 at 01:19:26PM +0100, Valentin Schneider wrote:
> The recently introduced sched_numa_hop_mask() exposes cpumasks of CPUs
> reachable within a given distance budget, wrap the logic for iterating over
> all (distance, mask) values inside an iterator macro.

...
>  #ifdef CONFIG_NUMA
> -extern const struct cpumask *sched_numa_hop_mask(int node, int hops);
> +extern const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops);
>  #else
> -static inline const struct cpumask *sched_numa_hop_mask(int node, int hops)
> +static inline const struct cpumask *
> +sched_numa_hop_mask(unsigned int node, unsigned int hops)
>  {
> -	if (node == NUMA_NO_NODE && !hops)
> -		return cpu_online_mask;
> -
>  	return ERR_PTR(-EOPNOTSUPP);
>  }
>  #endif /* CONFIG_NUMA */

I didn't get how the above two changes are related to the third one, which
introduces a for_each type of macro. If you need to change int --> unsigned
int, perhaps it can be done in a separate patch. The change inside the
inline version I don't know about. Not an expert.

...

> +#define for_each_numa_hop_mask(mask, node)				\
> +	for (unsigned int __hops = 0;					\
> +	     /*								\
> +	      * Unsightly trickery required as we can't both initialize	\
> +	      * @mask and declare __hops in for()'s first clause	\
> +	      */							\
> +	     mask = __hops > 0 ? mask :					\
> +		    node == NUMA_NO_NODE ?				\
> +		    cpu_online_mask : sched_numa_hop_mask(node, 0),	\
> +	     !IS_ERR_OR_NULL(mask);					\
> +	     __hops++,							\
> +	     mask = sched_numa_hop_mask(node, __hops))

This can be unified with a conditional, see for_each_gpio_desc_with_flag()
as an example of how.

-- 
With Best Regards,
Andy Shevchenko