From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <81e67b0e9f4b2e85024e57d461b2a7eef9d21f5b.camel@linux.intel.com>
Subject: Re: [RFC PATCH] sched/fair: dynamically scale the period of cache work
From: Tim Chen
To: Jianyong Wu, yu.c.chen@intel.com, luogengkun2@huawei.com
Cc: peterz@infradead.org, kprateek.nayak@amd.com, mingo@redhat.com, vincent.guittot@linaro.org, juri.lelli@redhat.com, dietmar.eggemann@arm.com, rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de, vschneid@redhat.com, vineethr@linux.ibm.com, hdanton@sina.com, sshegde@linux.ibm.com, jianyong.wu@outlook.com, cyy@cyyself.name, tingyin.duan@gmail.com, vernhao@tencent.com, haoxing990@gmail.com, len.brown@intel.com, aubrey.li@intel.com, zhao1.liu@intel.com, adamli@os.amperecomputing.com, ziqianlu@bytedance.com, tim.c.chen@intel.com, joshdon@google.com, gavinguo@igalia.com, qyousef@layalina.io, libchen@purestorage.com, linux-kernel@vger.kernel.org, huangsj@hygon.cn
Date: Wed, 15 Apr
2026 10:22:35 -0700
In-Reply-To: <20260413072309.2663668-1-wujianyong@hygon.cn>
References: <4fb7a6da-447d-452a-a920-7cd39b939ccb@intel.com> <20260413072309.2663668-1-wujianyong@hygon.cn>
Content-Type: text/plain; charset="UTF-8"
MIME-Version: 1.0

On Mon, 2026-04-13 at 15:23 +0800, Jianyong Wu wrote:
> When a preferred LLC is selected and remains stable, task_cache_work does
> not need to run frequently. Because it scans all system CPUs for
> computation, high-frequency execution hurts performance. We thus reduce
> the scan rate in such cases.
>

Thanks for your patch proposal.

> On the other hand, if the preferred node becomes suboptimal, we should

You mean the preferred LLC, right? "Preferred node" comes from NUMA balancing.

> increase the scan frequency to quickly find a better placement. The scan
> period is therefore dynamically adjusted.
>
> Signed-off-by: Jianyong Wu
>
> ---
> Hi ChenYu, Tim, Gengkun,
>
> I have another approach to address this issue, based on the observation
> that the scan work can be canceled if the preferred node is stable. This
> patch merely demonstrates the idea, but it still needs more testing to
> verify its functionality. I'm sending it out early to gather feedback and
> opinions.
>
>
<...>
> @@ -1822,9 +1835,35 @@ static void task_cache_work(struct callback_head *work)
> * 3. 2X is chosen based on test results, as it delivers
> * the optimal performance gain so far.
> */
> -	mm->sc_stat.cpu = m_a_cpu;
> +	if (m_a_occ > (2 * curr_m_a_occ))
> +		mm->sc_stat.cpu = m_a_cpu;
> +
> +	if (!mm->sc_stat.last_reset_tick)
> +		mm->sc_stat.last_reset_tick = now;
> +
> +	/* Change scan_period when preferred LLC changed */
> +	if (((mm->sc_stat.cpu != -1) && (m_a_cpu != -1)
> +	     && (llc_id(mm->sc_stat.cpu) != llc_id(m_a_cpu)))
> +	    || need_scan) {
> +		if (!need_scan)
> +			need_scan = 1;
> +
> +		WRITE_ONCE(mm->sc_stat.scan_period,
> +			   max(mm->sc_stat.scan_period >> 1, llc_scan_period_min));
> +		WRITE_ONCE(mm->sc_stat.last_reset_tick, now);
> +	}
> +	}
> +
> +	if ((now - READ_ONCE(mm->sc_stat.last_reset_tick) > llc_scan_period_threshold)
> +	    && !need_scan) {
> +		WRITE_ONCE(mm->sc_stat.scan_period, min(mm->sc_stat.scan_period << 1,
> +							llc_scan_period_max));

I think that llc_scan_period_max should be the same as llc_epoch_affinity_timeout.
We should not increase the scan period beyond that, as that's the time scale
where we consider the cache data relevant.

Tim