Message-ID: <60a23cdbc2341b5fb08cb5b42a6c27becb901a91.camel@linux.intel.com>
Subject: Re: [PATCH v2 3/4] sched/rt: Split root_domain->rto_count to per-NUMA-node counters
From: Tim Chen
To: Peter Zijlstra, Pan Deng
Cc: mingo@kernel.org, linux-kernel@vger.kernel.org, tianyou.li@intel.com, yu.c.chen@intel.com
Date: Mon, 23 Mar 2026 11:09:24 -0700
In-Reply-To: <20260320102440.GT3738786@noisy.programming.kicks-ass.net>
References: <20260320102440.GT3738786@noisy.programming.kicks-ass.net>

On Fri, 2026-03-20 at 11:24 +0100, Peter Zijlstra wrote:
> On Mon, Jul 21, 2025 at 02:10:25PM +0800, Pan Deng wrote:
> > As a complementary, this patch
splits
> > `rto_count` into per-numa-node counters to reduce the contention.
>
> Right... so Tim, didn't we have similar patches for task_group::load_avg
> or something like that? Whatever did happen there? Can we share common
> infra?

We did talk about introducing a per-NUMA-node counter for load_avg. We went
with limiting the update rate of load_avg to no more than once per msec in
commit 1528c661c24b4 to control the cache bouncing.

> Also since Tim is sitting on this LLC infrastructure, can you compare
> per-node and per-llc for this stuff? Somehow I'm thinking that a 2
> socket 480 CPU system only has like 2 nodes and while splitting this
> will help some, that might not be excellent.

You mean enhancing the per-NUMA-node counter to per-LLC? I think that
makes sense to reduce the LLC cache bouncing if there are multiple LLCs
per NUMA node.

Tim

> Please test on both Intel and AMD systems, since AMD has more of these
> LLC things on.