From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tim Chen
To: Peter Zijlstra, Ingo Molnar, K Prateek Nayak, Vincent Guittot
Cc: Chen Yu, Juri Lelli, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Valentin Schneider, Madadi Vineeth Reddy, Hillf Danton,
	Shrikanth Hegde, Jianyong Wu, Yangyu Chen, Tingyin Duan, Vern Hao,
	Len Brown, Tim Chen, Aubrey Li, Zhao Liu, Chen Yu, Adam Li,
	Aaron Lu, Tim Chen, Josh Don, Gavin Guo, Qais Yousef, Libo Chen,
	Luo Gengkun, linux-kernel@vger.kernel.org
Subject: [Patch v4 02/16] sched/cache: Disable cache aware scheduling for
 processes with high thread counts
Date: Wed, 13 May 2026 13:39:13 -0700
X-Mailer: git-send-email 2.32.0
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Chen Yu

A performance regression was observed by Prateek when running hackbench
with many threads per process (and hence a high fd count). To avoid this,
exclude processes with a large number of active threads from cache-aware
scheduling.

With sched_cache enabled, record the number of active threads in each
process during the periodic task_cache_work(). While iterating over the
CPUs, if the task currently running on a CPU belongs to the same process
as the task that launched task_cache_work(), increment the active thread
count. If the number of active threads in the process does not fit the
number of cores in the LLC (the LLC CPU count divided by the number of
SMT threads per core, with the usual fits_capacity() margin), do not
enable cache-aware scheduling for that process.

However, on systems with only a few CPUs per LLC, such as Power10/Power11
with SMT4 and an LLC size of 4, this check effectively disables
cache-aware scheduling for every process. One possible solution suggested
by Peter is to track a mask of preferred LLCs instead of a single
preferred LLC. Once a process can prefer a few LLCs rather than one, this
constraint becomes less restrictive. That could be a future enhancement.

For users who wish to perform task aggregation regardless, a debugfs knob
for tuning is provided in a subsequent change.

Tested-by: Tingyin Duan
Suggested-by: K Prateek Nayak
Suggested-by: Aaron Lu
Signed-off-by: Chen Yu
Co-developed-by: Tim Chen
Signed-off-by: Tim Chen
---
A standalone user-space sketch of the threshold and averaging logic is
appended after the diff for illustration.

 include/linux/sched.h |  1 +
 kernel/sched/fair.c   | 48 ++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 44 insertions(+), 5 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6d883f109ba3..6701911eaaf7 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2423,6 +2423,7 @@ struct sched_cache_stat {
 	struct sched_cache_time __percpu *pcpu_sched;
 	raw_spinlock_t lock;
 	unsigned long epoch;
+	u64 nr_running_avg;
 	unsigned long next_scan;
 	int cpu;
 } ____cacheline_aligned_in_smp;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a759ea669d74..808f614fc2d2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1384,6 +1384,12 @@ static int llc_id(int cpu)
 	return per_cpu(sd_llc_id, cpu);
 }
 
+static bool invalid_llc_nr(struct mm_struct *mm, int cpu)
+{
+	return !fits_capacity((mm->sc_stat.nr_running_avg * cpu_smt_num_threads),
+			      per_cpu(sd_llc_size, cpu));
+}
+
 static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
 {
 	struct sched_domain *sd;
@@ -1452,7 +1458,7 @@ void mm_init_sched(struct mm_struct *mm,
 	mm->sc_stat.epoch = epoch;
 	mm->sc_stat.cpu = -1;
 	mm->sc_stat.next_scan = jiffies;
-
+	mm->sc_stat.nr_running_avg = 0;
 	/*
 	 * The update to mm->sc_stat should not be reordered
 	 * before initialization to mm's other fields, in case
@@ -1574,7 +1580,8 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
 	 * If this process hasn't hit task_cache_work() for a while invalidate
 	 * its preferred state.
 	 */
-	if (epoch - READ_ONCE(mm->sc_stat.epoch) > EPOCH_LLC_AFFINITY_TIMEOUT) {
+	if (epoch - READ_ONCE(mm->sc_stat.epoch) > EPOCH_LLC_AFFINITY_TIMEOUT ||
+	    invalid_llc_nr(mm, cpu_of(rq))) {
 		if (mm->sc_stat.cpu != -1)
 			mm->sc_stat.cpu = -1;
 	}
@@ -1660,14 +1667,32 @@ static void get_scan_cpumasks(cpumask_var_t cpus, struct task_struct *p)
 	cpumask_copy(cpus, cpu_online_mask);
 }
 
+static inline void update_avg_scale(u64 *avg, u64 sample)
+{
+	int factor = per_cpu(sd_llc_size, raw_smp_processor_id());
+	s64 diff = sample - *avg;
+	u32 divisor;
+
+	/*
+	 * Scale the divisor based on the number of CPUs contained
+	 * in the LLC. This scaling ensures smaller LLC domains use
+	 * a smaller divisor to achieve more precise sensitivity to
+	 * changes in nr_running, while larger LLC domains are capped
+	 * at a maximum divisor of 8 which is the default smoothing
+	 * factor of EWMA in update_avg().
+	 */
+	divisor = clamp_t(u32, (factor >> 2), 2, 8);
+	*avg += div64_s64(diff, divisor);
+}
+
 static void task_cache_work(struct callback_head *work)
 {
 	unsigned long next_scan, now = jiffies;
-	struct task_struct *p = current;
+	struct task_struct *p = current, *cur;
+	int cpu, m_a_cpu = -1, nr_running = 0;
+	unsigned long curr_m_a_occ = 0;
 	struct mm_struct *mm = p->mm;
 	unsigned long m_a_occ = 0;
-	unsigned long curr_m_a_occ = 0;
-	int cpu, m_a_cpu = -1;
 	cpumask_var_t cpus;
 
 	WARN_ON_ONCE(work != &p->cache_work);
@@ -1711,6 +1736,11 @@ static void task_cache_work(struct callback_head *work)
 				m_occ = occ;
 				m_cpu = i;
 			}
+
+			cur = rcu_dereference_all(cpu_rq(i)->curr);
+			if (cur && !(cur->flags & (PF_EXITING | PF_KTHREAD)) &&
+			    cur->mm == mm)
+				nr_running++;
 		}
 
 		/*
@@ -1754,6 +1784,7 @@ static void task_cache_work(struct callback_head *work)
 		mm->sc_stat.cpu = m_a_cpu;
 	}
 
+	update_avg_scale(&mm->sc_stat.nr_running_avg, nr_running);
 	free_cpumask_var(cpus);
 }
 
@@ -10294,6 +10325,13 @@ static enum llc_mig can_migrate_llc_task(int src_cpu, int dst_cpu,
 	if (cpu < 0 || cpus_share_cache(src_cpu, dst_cpu))
 		return mig_unrestricted;
 
+	/* skip cache aware load balance for too many threads */
+	if (invalid_llc_nr(mm, dst_cpu)) {
+		if (mm->sc_stat.cpu != -1)
+			mm->sc_stat.cpu = -1;
+		return mig_unrestricted;
+	}
+
 	if (cpus_share_cache(dst_cpu, cpu))
 		to_pref = true;
 	else if (cpus_share_cache(src_cpu, cpu))
-- 
2.32.0
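
For readers who want to experiment with the heuristics above outside the
kernel, here is a minimal user-space sketch (an illustration only, not the
kernel implementation). It re-implements fits_capacity() with the
scheduler's ~80% capacity margin and takes the LLC size and SMT width as
plain parameters instead of reading per-CPU data; parameter names and the
example numbers are hypothetical.

/*
 * Illustration only: user-space approximation of invalid_llc_nr() and
 * update_avg_scale() from the patch above.
 */
#include <stdint.h>
#include <stdio.h>

/* Mirrors the scheduler's fits_capacity(): util must stay below ~80% of cap. */
static int fits_capacity(uint64_t util, uint64_t cap)
{
	return util * 1280 < cap * 1024;
}

/* Too many active threads for one LLC -> cache-aware scheduling is skipped. */
static int invalid_llc_nr(uint64_t nr_running_avg, unsigned int smt_threads,
			  unsigned int llc_size)
{
	return !fits_capacity(nr_running_avg * smt_threads, llc_size);
}

/* EWMA whose divisor scales with the LLC size, clamped to [2, 8]. */
static void update_avg_scale(uint64_t *avg, uint64_t sample, unsigned int llc_size)
{
	int64_t diff = (int64_t)sample - (int64_t)*avg;
	uint32_t divisor = llc_size >> 2;

	if (divisor < 2)
		divisor = 2;
	if (divisor > 8)
		divisor = 8;

	*avg += diff / (int64_t)divisor;
}

int main(void)
{
	uint64_t avg = 0;

	/* Power10-like LLC: 4 CPUs, SMT4 -> one core per LLC. */
	update_avg_scale(&avg, 3, 4);		/* divisor clamps to 2, avg becomes 1 */
	printf("avg=%llu invalid=%d\n", (unsigned long long)avg,
	       invalid_llc_nr(avg, 4, 4));	/* one running thread * SMT4 already fails the fit */

	/* Larger LLC: 32 CPUs, SMT2 -> 16 cores, an average of 8 threads still fits. */
	avg = 8;
	printf("avg=%llu invalid=%d\n", (unsigned long long)avg,
	       invalid_llc_nr(avg, 2, 32));
	return 0;
}

With a 4-CPU SMT4 LLC the divisor clamps to 2, so the average reacts
quickly, but even an average of one running thread trips the check, which
is the Power10/Power11 limitation described in the changelog.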