From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <63a095f02428700a7ff2623b8ea81e524a406834.camel@linux.intel.com>
Subject: Re: [PATCH v2 4/4] sched/rt: Split cpupri_vec->cpumask to per NUMA node to reduce contention
From: Tim Chen
To: Peter Zijlstra, Pan Deng
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, tianyou.li@intel.com, yu.c.chen@intel.com
Date: Mon, 23 Mar 2026 11:45:01 -0700
In-Reply-To: <20260320124003.GU3738786@noisy.programming.kicks-ass.net>
References: <20260320124003.GU3738786@noisy.programming.kicks-ass.net>
User-Agent: Evolution 3.58.1 (3.58.1-1.fc43)
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0

On Fri, 2026-03-20 at 13:40 +0100, Peter Zijlstra wrote:
> On Mon, Jul 21, 2025 at 02:10:26PM +0800, Pan Deng wrote:
>
> > This change splits
`cpupri_vec->cpumask` into per-NUMA-node data to
> > mitigate false sharing.
>
> So I really do think we need something here. We're running into the
> whole cpumask contention thing on a semi-regular basis.
>
> But somehow I doubt this is it.
>
> I would suggest building a radix-tree-like structure based on APIC ID
> -- which is inherently suitable for this given that this is exactly how
> CPUID-0b/1f are specified.
>

Are you thinking about replacing the cpumask in cpupri_vec with something
like an xarray?

And a question on using the APIC ID as the index for a CPU instead of the
CPU id: is it because you want to even out accesses in the tree?

Tim

> This of course makes it very much x86 specific, but perhaps other
> architectures can provide similarly structured id spaces suitable for
> this.
>
> If you make it so that it reduces to a single large level (equivalent to
> the normal bitmaps) when no intermediate masks are specified, it should
> work for all, and then architectures can opt-in by providing a suitable
> id space and masks.
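[Editor's sketch] For concreteness, the shape being discussed above -- per-node mask words on separate cache lines, plus one coarse top-level word that readers consult before touching any per-node line -- could look roughly like the following. This is a userspace C11 illustration, not the kernel's cpumask or cpupri API; NR_NODES, CPUS_PER_NODE, node_of() and all the names here are invented for the sketch:

```c
#include <assert.h>
#include <stdatomic.h>
#include <string.h>

#define NR_NODES      4
#define CPUS_PER_NODE 64   /* one 64-bit word per node, for simplicity */

/* Pad each node's word out to its own cache line so that updates
 * from CPUs on different nodes do not false-share. */
struct node_word {
    _Alignas(64) _Atomic unsigned long long w;
};

struct cpupri_vec_sketch {
    _Atomic unsigned long long node_summary;  /* bit n set: node n non-empty */
    struct node_word node_mask[NR_NODES];
};

static int node_of(int cpu) { return cpu / CPUS_PER_NODE; }

static void vec_set_cpu(struct cpupri_vec_sketch *v, int cpu)
{
    int n = node_of(cpu);
    atomic_fetch_or(&v->node_mask[n].w, 1ULL << (cpu % CPUS_PER_NODE));
    atomic_fetch_or(&v->node_summary, 1ULL << n);
}

static int vec_test_cpu(struct cpupri_vec_sketch *v, int cpu)
{
    unsigned long long w = atomic_load(&v->node_mask[node_of(cpu)].w);
    return (int)((w >> (cpu % CPUS_PER_NODE)) & 1ULL);
}

/* Find any set CPU: check the one-word summary first so that empty
 * nodes cost no extra cache-line reads; returns -1 if nothing is set. */
static int vec_find_any(struct cpupri_vec_sketch *v)
{
    unsigned long long s = atomic_load(&v->node_summary);
    for (int n = 0; n < NR_NODES; n++) {
        if (!(s & (1ULL << n)))
            continue;
        unsigned long long w = atomic_load(&v->node_mask[n].w);
        if (w)
            return n * CPUS_PER_NODE + __builtin_ctzll(w);
    }
    return -1;
}
```

With NR_NODES == 1 this degenerates to the single flat bitmap Peter describes, which is presumably the opt-out path for architectures that don't provide a structured id space.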