Message-ID: <615986fc142de4c25a918083f6148752b1b341f1.camel@linux.intel.com>
Subject: Re: [Patch v4 02/22] sched/cache: Limit the scan number of CPUs when calculating task occupancy
From: Tim Chen
To: "Chen, Yu C", Luo Gengkun
Cc: Peter Zijlstra, K Prateek Nayak, Ingo Molnar, Vincent Guittot, Juri Lelli,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
 Madadi Vineeth Reddy, Hillf Danton, Shrikanth Hegde, Jianyong Wu, Yangyu Chen,
 Tingyin Duan, Vern Hao, Len Brown, Aubrey Li, Zhao Liu, Adam Li, Aaron Lu,
 Tim Chen, Josh Don, Gavin Guo, Qais Yousef, Libo Chen, linux-kernel@vger.kernel.org
Date: Fri, 10 Apr 2026 10:12:06 -0700
References: <57ed5fcec9b242803fe4ea2ce6e7f3de6a6efc6b.1775065312.git.tim.c.chen@linux.intel.com>
On Fri, 2026-04-10 at 15:29 +0800, Chen, Yu C wrote:
> Hi Gengkun,
>
> On 4/9/2026 9:17 PM, Luo Gengkun wrote:
> >
> > On 2026/4/2 5:52, Tim Chen wrote:
> > > From: Chen Yu
>
> [ ... ]
>
> > To address the issue of scanning overhead, there is a more targeted
> > approach: only scan the CPUs actually accessed by the process, and
> > evict these CPUs when they remain unaccessed for a specific period of
> > time.
>
> Thanks for the patch. This approach looks quite sensible to me, and
>
> [ ... ]
>
> >         /*
> >          * The update to mm->sc_stat should not be reordered
> > @@ -1582,6 +1584,7 @@ void account_mm_sched(struct rq *rq, struct
> > task_struct *p, s64 delta_exec)
> >                 pcpu_sched->runtime += delta_exec;
> >                 rq->cpu_runtime += delta_exec;
> >                 epoch = rq->cpu_epoch;
> > +               cpumask_set_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus);
>
> I would prefer a check before writing, to avoid c2c overhead:
>
> 	if (!cpumask_test_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus))
> 		cpumask_set_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus);

I think a similar check is also needed for the clearing of a visited CPU. It is
possible that the CPU's bit in the visited_cpus mask was already cleared a long
time back.
Change

+	if (llc_epoch_visited_timeout && (rq->cpu_epoch - pcpu_sched->epoch) >
+			llc_epoch_visited_timeout) {
+		cpumask_clear_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus);
+		continue;
+	}

to

+	if (llc_epoch_visited_timeout &&
+	    cpumask_test_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus) &&
+	    (rq->cpu_epoch - pcpu_sched->epoch) > llc_epoch_visited_timeout) {
+		cpumask_clear_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus);
+		continue;
+	}

Thanks.

Tim