From: Jonathan Corbet
To: Mathieu Desnoyers, Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Thomas Gleixner, "Paul E. McKenney",
 Boqun Feng, "H. Peter Anvin", Paul Turner, linux-api@vger.kernel.org,
 Christian Brauner, Florian Weimer, David.Laight@ACULAB.COM,
 carlos@redhat.com, Peter Oskolkov, Mathieu Desnoyers
Subject: Re: [RFC PATCH v2 09/11] sched: Introduce per memory space current virtual cpu id
In-Reply-To: <20220218210633.23345-10-mathieu.desnoyers@efficios.com>
References: <20220218210633.23345-1-mathieu.desnoyers@efficios.com>
 <20220218210633.23345-10-mathieu.desnoyers@efficios.com>
Date: Fri, 25 Feb 2022 10:35:29 -0700
Message-ID: <87k0dikfxa.fsf@meer.lwn.net>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

Mathieu Desnoyers writes:

> This feature allows the scheduler to expose a current virtual cpu id
> to user-space. This virtual cpu id is within the possible cpus range,
> and is temporarily (and uniquely) assigned while threads are actively
> running within a memory space. If a memory space has fewer threads than
> cores, or is limited to run on few cores concurrently through sched
> affinity or cgroup cpusets, the virtual cpu ids will be values close
> to 0, thus allowing efficient use of user-space memory for per-cpu
> data structures.

So I have one possibly (probably) dumb question: if I'm writing a
program to make use of virtual CPU IDs, how do I know what the maximum
ID will be?  It seems like one of the advantages of this mechanism
would be not having to be prepared for anything in the physical-ID
space, but is there any guarantee that the virtual-ID space will be
smaller?  Something like "no larger than the number of threads", say?

Thanks,

jon