Date: Mon, 13 Apr 2026 10:43:35 +0200
From: Ingo Molnar
To: Linus Torvalds
Cc: linux-kernel@vger.kernel.org, Peter Zijlstra, Thomas Gleixner,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
    Mel Gorman, Tejun Heo, Valentin Schneider, Shrikanth Hegde,
    Andrea Righi, Joel Fernandes, John Stultz, K Prateek Nayak
Subject: [GIT PULL] Scheduler changes for v7.1

Linus,

Please pull the latest sched/core Git tree from:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-core-2026-04-13

for you to fetch changes up to 78cde54ea5f03398f1cf6656de2472068f6da966:

Scheduler changes for v7.1:

 Fair scheduling updates:

  - Skip SCHED_IDLE rq for SCHED_IDLE tasks (Christian Loehle)
  - Remove superfluous rcu_read_lock() in the wakeup path (K Prateek Nayak)
  - Simplify the entry condition for update_idle_cpu_scan() (K Prateek Nayak)
  - Simplify SIS_UTIL handling in select_idle_cpu() (K Prateek Nayak)
  - Avoid overflow in enqueue_entity() (K Prateek Nayak)
  - Update overutilized detection (Vincent Guittot)
  - Prevent negative lag increase during delayed dequeue (Vincent Guittot)
  - Clear buddies for preempt_short (Vincent Guittot)
  - Implement more complex proportional newidle balance (Peter Zijlstra)
  - Increase weight bits for avg_vruntime (Peter Zijlstra)
  - Use full weight to __calc_delta() (Peter Zijlstra)

 RT and DL scheduling updates:

  - Fix incorrect schedstats for RT and DL threads (Dengjun Su)
  - Skip group schedulable check with rt_group_sched=0 (Michal Koutný)
  - Move group schedulability check to sched_rt_global_validate() (Michal Koutný)
  - Add reporting of runtime left & abs deadline to sched_getattr()
    for DEADLINE tasks (Tommaso Cucinotta)

 Scheduling topology updates by K Prateek Nayak:

  - Compute sd_weight considering cpuset partitions
  - Extract "imb_numa_nr" calculation into a separate helper
  - Allocate per-CPU sched_domain_shared in s_data
  - Switch to assigning "sd->shared" from s_data
  - Remove sched_domain_shared allocation with sd_data

 Energy-aware scheduling updates:

  - Filter false overloaded_group case for EAS (Vincent Guittot)
  - PM: EM: Switch to rcu_dereference_all() in wakeup path (Dietmar Eggemann)

 Infrastructure updates:

  - Replace use of system_unbound_wq with system_dfl_wq (Marco Crivellari)

 Proxy scheduling updates by John Stultz:

  - Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()
  - Minimise repeated sched_proxy_exec() checking
  - Fix potentially missing balancing with Proxy Exec
  - Fix and improve task::blocked_on et al handling
  - Add assert_balance_callbacks_empty() helper
  - Add logic to zap balancing callbacks if we pick again
  - Move attach_one_task() and attach_task() helpers to sched.h
  - Handle blocked-waiter migration (and return migration)
  - Add K Prateek Nayak to scheduler reviewers for proxy execution

 Misc cleanups and fixes by John Stultz, Joseph Salisbury, Peter Zijlstra,
 K Prateek Nayak, Michal Koutný, Randy Dunlap, Shrikanth Hegde,
 Vincent Guittot, Zhan Xusheng and Xie Yuanbin.
Thanks,

	Ingo

------------------>
Christian Loehle (1):
      sched/fair: Skip SCHED_IDLE rq for SCHED_IDLE task

Dengjun Su (1):
      sched: Fix incorrect schedstats for rt and dl thread

Dietmar Eggemann (1):
      PM: EM: Switch to rcu_dereference_all() in wakeup path

John Stultz (11):
      MAINTAINERS: Add K Prateek Nayak to scheduler reviewers
      sched: Make class_schedulers avoid pushing current, and get rid of proxy_tag_curr()
      sched: Minimise repeated sched_proxy_exec() checking
      sched: Fix potentially missing balancing with Proxy Exec
      locking: Add task::blocked_lock to serialize blocked_on state
      sched: Fix modifying donor->blocked_on without proper locking
      sched/locking: Add special p->blocked_on==PROXY_WAKING value for proxy return-migration
      sched: Add assert_balance_callbacks_empty helper
      sched: Add logic to zap balance callbacks if we pick again
      sched: Move attach_one_task and attach_task helpers to sched.h
      sched: Handle blocked-waiter migration (and return migration)

Joseph Salisbury (1):
      sched: Use u64 for bandwidth ratio calculations

K Prateek Nayak (10):
      sched/topology: Compute sd_weight considering cpuset partitions
      sched/topology: Extract "imb_numa_nr" calculation into a separate helper
      sched/topology: Allocate per-CPU sched_domain_shared in s_data
      sched/topology: Switch to assigning "sd->shared" from s_data
      sched/topology: Remove sched_domain_shared allocation with sd_data
      sched/core: Check for rcu_read_lock_any_held() in idle_get_state()
      sched/fair: Remove superfluous rcu_read_lock() in the wakeup path
      sched/fair: Simplify the entry condition for update_idle_cpu_scan()
      sched/fair: Simplify SIS_UTIL handling in select_idle_cpu()
      sched/fair: Avoid overflow in enqueue_entity()

Marco Crivellari (1):
      sched: Replace use of system_unbound_wq with system_dfl_wq

Michal Koutný (3):
      sched/rt: Skip group schedulable check with rt_group_sched=0
      sched/rt: Move group schedulability check to sched_rt_global_validate()
      sched/rt: Cleanup global RT bandwidth functions

Peter Zijlstra (5):
      sched/fair: More complex proportional newidle balance
      sched/fair: Increase weight bits for avg_vruntime
      sched/fair: Revert 6d71a9c61604 ("sched/fair: Fix EEVDF entity placement bug causing scheduling lag")
      sched/fair: Use full weight to __calc_delta()
      sched/topology: Fix sched_domain_span()

Randy Dunlap (1):
      sched/wait: correct kernel-doc descriptions

Shrikanth Hegde (2):
      sched/fair: Get this cpu once in find_new_ilb()
      sched/core: Get this cpu once in ttwu_queue_cond()

Tommaso Cucinotta (1):
      sched/deadline: Add reporting of runtime left & abs deadline to sched_getattr() for DEADLINE tasks

Vincent Guittot (5):
      sched/fair: Update overutilized detection
      sched/fair: Filter false overloaded_group case for EAS
      sched/fair: Use sched_energy_enabled()
      sched/fair: Prevent negative lag increase during delayed dequeue
      sched/eevdf: Clear buddies for preempt_short

Xie Yuanbin (2):
      x86/mm/tlb: Make enter_lazy_tlb() always inline on x86
      sched/headers: Inline raw_spin_rq_unlock()

Zhan Xusheng (1):
      sched/fair: Fix comma operator misuse in NUMA fault accounting

 MAINTAINERS                        |   1 +
 arch/x86/include/asm/mmu_context.h |   3 -
 arch/x86/include/asm/tlbflush.h    |  26 ++
 arch/x86/mm/tlb.c                  |  21 --
 include/linux/energy_model.h       |   4 +-
 include/linux/sched.h              |  91 ++++---
 include/linux/sched/topology.h     |  26 +-
 include/linux/wait_bit.h           |   4 +-
 include/uapi/linux/sched.h         |   3 +
 init/init_task.c                   |   1 +
 kernel/fork.c                      |   1 +
 kernel/locking/mutex-debug.c       |   4 +-
 kernel/locking/mutex.c             |  40 ++-
 kernel/locking/mutex.h             |   6 +
 kernel/locking/ww_mutex.h          |  16 +-
 kernel/sched/core.c                | 334 +++++++++++++++++++-----
 kernel/sched/deadline.c            |  41 ++-
 kernel/sched/debug.c               |  14 +-
 kernel/sched/ext.c                 |   4 +-
 kernel/sched/fair.c                | 513 ++++++++++++++++++++++++++-----------
 kernel/sched/features.h            |   3 +
 kernel/sched/rt.c                  |  64 ++---
 kernel/sched/sched.h               |  50 +++-
 kernel/sched/syscalls.c            |  16 +-
 kernel/sched/topology.c            | 279 ++++++++++++--------
 25 files changed, 1100 insertions(+), 465 deletions(-)
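For readers unfamiliar with the pull-request workflow this mail drives: the receiver fetches the named branch from the quoted URL and merges FETCH_HEAD, then checks the resulting head against the quoted commit ID. The sketch below demonstrates the same sequence with throwaway local repositories standing in for git.kernel.org (the repo paths and commit messages here are invented for the demo).

```shell
# Sketch: the receiving side of a pull request like this one. The real pull
# would fetch git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git;
# local throwaway repositories stand in for it here.
set -e
tmp=$(mktemp -d)

# Stand-in for tip.git: a base history plus a sched-core-2026-04-13 branch.
git init -q -b master "$tmp/tip"
cd "$tmp/tip"
git -c user.name=t -c user.email=t@example.org commit -q --allow-empty -m "base"
git checkout -q -b sched-core-2026-04-13
git -c user.name=t -c user.email=t@example.org commit -q --allow-empty -m "sched changes"
git checkout -q master

# Maintainer side: clone mainline, fetch the named branch, merge FETCH_HEAD.
git clone -q "$tmp/tip" "$tmp/mainline"
cd "$tmp/mainline"
git fetch -q origin sched-core-2026-04-13
git merge -q --no-ff -m "Merge branch 'sched-core-2026-04-13'" FETCH_HEAD
git log --oneline -2
```

The --no-ff merge preserves the pulled series as a distinct branch in history, which is the convention for subsystem pulls into mainline.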