Date: Fri, 1 May 2026 15:21:43 +0200
From: Peter Zijlstra
To: John Stultz
Cc: LKML, Vineeth Pillai, Sonam Sanju, Sean Christopherson, Kunwu Chan,
	Tejun Heo, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
	Will Deacon, Waiman Long, Boqun Feng, "Paul E. McKenney", Metin Kaya,
	Xuewen Yan, K Prateek Nayak, Thomas Gleixner, Daniel Lezcano,
	Suleiman Souhlal, kuyo chang, hupu, kernel-team@android.com
Subject: Re: [PATCH v2 1/2] sched: proxy-exec: Close race causing workqueue work being delayed
Message-ID: <20260501132143.GC1026330@noisy.programming.kicks-ass.net>
References: <20260430215103.2978955-1-jstultz@google.com> <20260430215103.2978955-2-jstultz@google.com>
In-Reply-To: <20260430215103.2978955-2-jstultz@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Sorry for being late, I was unwell for a few days :/

On Thu, Apr 30, 2026 at 09:50:46PM +0000, John Stultz wrote:
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 368c7b4d7cb51..8b9e971d98f67 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2183,18 +2183,56 @@ extern int __cond_resched_rwlock_write(rwlock_t *lock) __must_hold(lock);
>  #ifndef CONFIG_PREEMPT_RT
>  
>  /*
> - * With proxy exec, if a task has been proxy-migrated, it may be a donor
> - * on a cpu that it can't actually run on. Thus we need a special state
> - * to denote that the task is being woken, but that it needs to be
> - * evaluated for return-migration before it is run. So if the task is
> - * blocked_on PROXY_WAKING, return migrate it before running it.
> + * The proxy exec blocked_on pointer value uses the low bit as a latch
> + * value which clarifies if the blocked_on value is used for proxying or
> + * not.
> + *
> + * The state machine looks something like
> + *   NULL -> ptr:unlatched -> ptr:latched -> PROXY_WAKING -> NULL
> + *
> + * With some additional transitions:
> + *   ptr:unlatched -> NULL (done on current, or via set_task_blocked_on_waking())
> + *   ptr:latched -> NULL (done only on current)
> + *
> + * 1) NULL and ptr:unlatched are effectively equivalent, no proxying will occur
> + * 2) ptr:latched is the state when proxying will occur
> + * 3) PROXY_WAKING is used when the task is being woken to ensure we
> + *    return-migrate proxy-migrated tasks before running them (note it has
> + *    the latch bit set).
>   */
> -#define PROXY_WAKING ((struct mutex *)(-1L))
> +#define PROXY_BLOCKED_LATCH (1UL)
> +#define PROXY_BLOCKED_ON_MASK(x) ((struct mutex *)((unsigned long)(x) & ~PROXY_BLOCKED_LATCH))
> +#define PROXY_WAKING ((struct mutex *)(-1L)) /* PROXY_WAKING has LATCH bit set */

Urgh, please no. You're making it needlessly complicated.

There really are two separate states, set by two different chains of
logic:

 - the blocked_on link, set by the blocking primitive (mutex);

 - the is_blocked state, set by the scheduler when logically blocking
   the task.

By munging them together like that, you also inherit that blocked_lock
into contexts that really don't need it, and you're also sprinkling more
of that sched_proxy_exec() stuff around.

If we keep them nicely separated, none of that happens, and
additionally, we might be able to get rid of the p->se.sched_delayed
(ab)use in the core code (eventually).

Does something like the below really not work?
---
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 368c7b4d7cb5..0bd5da8360f3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -846,7 +846,11 @@ struct task_struct {
 	struct alloc_tag		*alloc_tag;
 #endif
 
-	int				on_cpu;
+	u8				on_cpu;
+	u8				on_rq;
+	u8				is_blocked;
+	u8				__pad;
+
 	struct __call_single_node	wake_entry;
 	unsigned int			wakee_flips;
 	unsigned long			wakee_flip_decay_ts;
@@ -861,7 +865,6 @@ struct task_struct {
 	 */
 	int				recent_used_cpu;
 	int				wake_cpu;
-	int				on_rq;
 
 	int				prio;
 	int				static_prio;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b8871449d3c6..f679d65d98a3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -615,6 +615,12 @@ EXPORT_SYMBOL(__trace_set_current_state);
  * [ The astute reader will observe that it is possible for two tasks on one
  *   CPU to have ->on_cpu = 1 at the same time. ]
  *
+ * p->is_blocked <- { 0, 1 }:
+ *
+ *   is set by try_to_block_task() and cleared by ttwu_do_wakeup() and tracks
+ *   if the task is blocked. Traditionally this would mirror p->on_rq, however
+ *   due to things like DELAY_DEQUEUE and PROXY_EXEC, this can diverge.
+ *
  * task_cpu(p): is changed by set_task_cpu(), the rules are:
  *
  *  - Don't call set_task_cpu() on a blocked task:
@@ -3685,6 +3691,7 @@ ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
  */
 static inline void ttwu_do_wakeup(struct task_struct *p)
 {
+	p->is_blocked = 0;
 	WRITE_ONCE(p->__state, TASK_RUNNING);
 	trace_sched_wakeup(p);
 }
@@ -4173,6 +4180,7 @@ int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
 	 * it disabling IRQs (this allows not taking ->pi_lock).
 	 */
 	WARN_ON_ONCE(p->se.sched_delayed);
+	WARN_ON_ONCE(p->is_blocked);
 	if (!ttwu_state_match(p, state, &success))
 		goto out;
@@ -4463,6 +4471,7 @@ static void __sched_fork(u64 clone_flags, struct task_struct *p)
 	/* A delayed task cannot be in clone(). */
 	WARN_ON_ONCE(p->se.sched_delayed);
+	WARN_ON_ONCE(p->is_blocked);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq = NULL;
@@ -6593,6 +6602,8 @@ static bool try_to_block_task(struct rq *rq, struct task_struct *p,
 		return false;
 	}
 
+	p->is_blocked = 1;
+
 	/*
 	 * We check should_block after signal_pending because we
 	 * will want to wake the task in that case. But if
@@ -7108,7 +7119,7 @@ static void __sched notrace __schedule(int sched_mode)
 	struct task_struct *prev_donor = rq->donor;
 
 	rq_set_donor(rq, next);
-	if (unlikely(next->blocked_on)) {
+	if (unlikely(next->is_blocked && next->blocked_on)) {
 		next = find_proxy_task(rq, next, &rf);
 		if (!next) {
 			zap_balance_callbacks(rq);