From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Vincent Guittot
Cc: Chen Yu, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
	Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
	Vern Hao, Len Brown, Tim Chen, Aubrey Li, Zhao Liu, Chen Yu,
	Adam Li, Aaron Lu, Tim Chen, Josh Don, Gavin Guo, Qais Yousef,
	Libo Chen, Luo Gengkun, linux-kernel@vger.kernel.org
Subject: [Patch v4 06/16] sched/cache: Add user control to adjust the aggressiveness of cache-aware scheduling
Date: Wed, 13 May 2026 13:39:17 -0700
Message-Id: <1c62cc060ba2b33d7b1f0ed98b3390128edbae93.1778703694.git.tim.c.chen@linux.intel.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: 
References: 
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Chen Yu
Introduce a set of debugfs knobs to control how aggressively
cache-aware scheduling aggregates tasks.

(1) aggr_tolerance

With sched_cache enabled, the scheduler uses a process's memory
footprint as a proxy for its LLC footprint to decide whether
aggregating its tasks on the preferred LLC could cause cache
contention. If the footprint exceeds the LLC size, aggregation is
skipped. Since the kernel cannot efficiently track per-task cache
usage (resctrl is user-space only), user space can provide a more
accurate hint.

Introduce /sys/kernel/debug/sched/llc_balancing/aggr_tolerance to let
users control how strictly the footprint limits aggregation. Values
range from 0 to 100:

- 0: cache-aware scheduling is disabled.
- 1: strict; tasks whose footprint exceeds the LLC size are skipped.
- >=100: aggressive; tasks are aggregated regardless of footprint.

For example, with a 32MB L3 cache:

- aggr_tolerance=1  -> tasks with footprint > 32MB are skipped.
- aggr_tolerance=99 -> tasks with footprint > 784GB are skipped
  (784GB = (1 + (99 - 1) * 256) * 32MB).

The same knob also controls how strictly the number of active threads
is considered during cache-aware load balance. The SMT count is taken
into account as well: high SMT counts reduce the aggregation capacity,
preventing excessive task aggregation on SMT-heavy systems such as
Power10/Power11.

Yangyu suggested introducing separate aggregation controls for the
active-thread and memory-footprint checks. Since there are plans to
add per-process/task-group controls, fine-grained tunables are
deferred to that implementation.

(2) epoch_period, epoch_affinity_timeout, imb_pct and overaggr_pct
are also turned into tunables.
Tested-by: Tingyin Duan
Suggested-by: K Prateek Nayak
Suggested-by: Madadi Vineeth Reddy
Suggested-by: Shrikanth Hegde
Suggested-by: Tingyin Duan
Suggested-by: Jianyong Wu
Suggested-by: Yangyu Chen
Signed-off-by: Chen Yu
Co-developed-by: Tim Chen
Signed-off-by: Tim Chen
---
 kernel/sched/debug.c | 10 +++++++
 kernel/sched/fair.c  | 68 ++++++++++++++++++++++++++++++++++++++------
 kernel/sched/sched.h |  5 ++++
 3 files changed, 75 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 2eae67cd2ba2..fe569539e888 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -670,6 +670,16 @@ static __init int sched_init_debug(void)
 	llc = debugfs_create_dir("llc_balancing", debugfs_sched);
 	debugfs_create_file("enabled", 0644, llc, NULL,
 			    &sched_cache_enable_fops);
+	debugfs_create_u32("aggr_tolerance", 0644, llc,
+			   &llc_aggr_tolerance);
+	debugfs_create_u32("epoch_period", 0644, llc,
+			   &llc_epoch_period);
+	debugfs_create_u32("epoch_affinity_timeout", 0644, llc,
+			   &llc_epoch_affinity_timeout);
+	debugfs_create_u32("overaggr_pct", 0644, llc,
+			   &llc_overaggr_pct);
+	debugfs_create_u32("imb_pct", 0644, llc,
+			   &llc_imb_pct);
 #endif
 
 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a10116ffe0d1..01ce646792ff 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1375,6 +1375,11 @@ static void set_next_buddy(struct sched_entity *se);
  */
 #define EPOCH_PERIOD	(HZ / 100)	/* 10 ms */
 #define EPOCH_LLC_AFFINITY_TIMEOUT	5	/* 50 ms */
+__read_mostly unsigned int llc_aggr_tolerance = 1;
+__read_mostly unsigned int llc_epoch_period = EPOCH_PERIOD;
+__read_mostly unsigned int llc_epoch_affinity_timeout = EPOCH_LLC_AFFINITY_TIMEOUT;
+__read_mostly unsigned int llc_imb_pct = 20;
+__read_mostly unsigned int llc_overaggr_pct = 50;
 
 static int llc_id(int cpu)
 {
@@ -1384,11 +1389,25 @@ static int llc_id(int cpu)
 	return per_cpu(sd_llc_id, cpu);
 }
 
+static inline int get_sched_cache_scale(int mul)
+{
+	unsigned int tol = READ_ONCE(llc_aggr_tolerance);
+
+	if (!tol)
+		return 0;
+
+	if (tol >= 100)
+		return INT_MAX;
+
+	return (1 + (tol - 1) * mul);
+}
+
 static bool exceed_llc_capacity(struct mm_struct *mm, int cpu)
 {
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned long llc, footprint;
 	struct sched_domain *sd;
+	int scale;
 
 	guard(rcu)();
 
@@ -1404,7 +1423,28 @@ static bool exceed_llc_capacity(struct mm_struct *mm, int cpu)
 	llc = sd->llc_bytes;
 	footprint = READ_ONCE(mm->sc_stat.footprint);
 
-	return (llc < (footprint * PAGE_SIZE));
+	/*
+	 * Scale the LLC size by 256*llc_aggr_tolerance
+	 * and compare it to the task's footprint.
+	 *
+	 * Suppose the L3 size is 32MB. If the
+	 * llc_aggr_tolerance is 1:
+	 * When the footprint is larger than 32MB, the
+	 * process is regarded as exceeding the LLC
+	 * capacity. If the llc_aggr_tolerance is 99:
+	 * When the footprint is larger than 784GB, the
+	 * process is regarded as exceeding the LLC
+	 * capacity:
+	 * 784GB = (1 + (99 - 1) * 256) * 32MB
+	 * If the llc_aggr_tolerance is 100:
+	 * ignore the footprint and do the aggregation
+	 * anyway.
+	 */
+	scale = get_sched_cache_scale(256);
+	if (scale == INT_MAX)
+		return false;
+
+	return ((llc * (u64)scale) < (footprint * PAGE_SIZE));
 }
 #endif
 	return false;
@@ -1413,11 +1453,21 @@ static bool exceed_llc_capacity(struct mm_struct *mm, int cpu)
 static bool invalid_llc_nr(struct mm_struct *mm, struct task_struct *p,
 			   int cpu)
 {
+	int scale;
+
 	if (get_nr_threads(p) <= 1)
 		return true;
 
+	/*
+	 * Scale the number of 'cores' in a LLC by llc_aggr_tolerance
+	 * and compare it to the task's active threads.
+	 */
+	scale = get_sched_cache_scale(1);
+	if (scale == INT_MAX)
+		return false;
+
 	return !fits_capacity((mm->sc_stat.nr_running_avg * cpu_smt_num_threads),
-			      per_cpu(sd_llc_size, cpu));
+			      (scale * per_cpu(sd_llc_size, cpu)));
 }
 
 static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
@@ -1513,13 +1563,14 @@ static inline void __update_mm_sched(struct rq *rq,
 {
 	lockdep_assert_held(&rq->cpu_epoch_lock);
 
+	unsigned int period = max(READ_ONCE(llc_epoch_period), 1U);
 	unsigned long n, now = jiffies;
 	long delta = now - rq->cpu_epoch_next;
 
 	if (delta > 0) {
-		n = (delta + EPOCH_PERIOD - 1) / EPOCH_PERIOD;
+		n = (delta + period - 1) / period;
 		rq->cpu_epoch += n;
-		rq->cpu_epoch_next += n * EPOCH_PERIOD;
+		rq->cpu_epoch_next += n * period;
 		__shr_u64(&rq->cpu_runtime, n);
 	}
 
@@ -1611,7 +1662,7 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 	 * If this process hasn't hit task_cache_work() for a while invalidate
 	 * its preferred state.
 	 */
-	if (epoch - READ_ONCE(mm->sc_stat.epoch) > EPOCH_LLC_AFFINITY_TIMEOUT ||
+	if (epoch - READ_ONCE(mm->sc_stat.epoch) > llc_epoch_affinity_timeout ||
 	    invalid_llc_nr(mm, p, cpu_of(rq)) ||
 	    exceed_llc_capacity(mm, cpu_of(rq))) {
 		if (mm->sc_stat.cpu != -1)
@@ -1740,7 +1791,8 @@ static void task_cache_work(struct callback_head *work)
 
 	/* only 1 thread is allowed to scan */
 	if (!try_cmpxchg(&mm->sc_stat.next_scan, &next_scan,
-			 now + EPOCH_PERIOD))
+			 now + max_t(unsigned long,
+				     READ_ONCE(llc_epoch_period), 1)))
 		return;
 
 	curr_cpu = task_cpu(p);
@@ -10232,7 +10284,7 @@ static inline int task_is_ineligible_on_dst_cpu(struct task_struct *p, int dest_
  */
 static bool fits_llc_capacity(unsigned long util, unsigned long max)
 {
-	u32 aggr_pct = 50;
+	u32 aggr_pct = llc_overaggr_pct;
 
 	/*
 	 * For single core systems, raise the aggregation
@@ -10252,7 +10304,7 @@ static bool fits_llc_capacity(unsigned long util, unsigned long max)
  */
 /* Allows dst util to be bigger than src util by up to bias percent */
 #define util_greater(util1, util2) \
-	((util1) * 100 > (util2) * 120)
+	((util1) * 100 > (util2) * (100 + llc_imb_pct))
 
 static __maybe_unused bool get_llc_stats(int cpu, unsigned long *util,
 					 unsigned long *cap)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index f499d5dd1130..27409399137c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -4072,6 +4072,11 @@ static inline void mm_cid_switch_to(struct task_struct *prev, struct task_struct
 DECLARE_STATIC_KEY_FALSE(sched_cache_present);
 DECLARE_STATIC_KEY_FALSE(sched_cache_active);
 extern int sysctl_sched_cache_user;
+extern unsigned int llc_aggr_tolerance;
+extern unsigned int llc_epoch_period;
+extern unsigned int llc_epoch_affinity_timeout;
+extern unsigned int llc_imb_pct;
+extern unsigned int llc_overaggr_pct;
 
 static inline bool sched_cache_enabled(void)
 {
-- 
2.32.0