From: Frederic Weisbecker <frederic@kernel.org>
To: Will Deacon <will@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Ard Biesheuvel <ardb@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Peter Zijlstra <peterz@infradead.org>,
Vincent Guittot <vincent.guittot@linaro.org>,
Thomas Gleixner <tglx@linutronix.de>,
Vlastimil Babka <vbabka@suse.cz>,
"Paul E. McKenney" <paulmck@kernel.org>,
Neeraj Upadhyay <neeraj.upadhyay@kernel.org>,
Joel Fernandes <joel@joelfernandes.org>,
Boqun Feng <boqun.feng@gmail.com>,
Zqiang <qiang.zhang1211@gmail.com>,
Uladzislau Rezki <urezki@gmail.com>,
rcu@vger.kernel.org, Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 10/19] sched,arm64: Handle CPU isolation on last resort fallback rq selection
Date: Sun, 5 Jan 2025 00:22:33 +0100
Message-ID: <Z3nCuQwP4qA66bcd@pavilion.home>
In-Reply-To: <20250103152702.GB3816@willie-the-truck>
On Fri, Jan 03, 2025 at 03:27:03PM +0000, Will Deacon wrote:
> On Wed, Dec 11, 2024 at 04:40:23PM +0100, Frederic Weisbecker wrote:
> > +const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
> > +{
> > +	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
> > +		return housekeeping_cpumask(HK_TYPE_TICK);
> > +
> > +	if (!is_compat_thread(task_thread_info(p)))
> > +		return housekeeping_cpumask(HK_TYPE_TICK);
> > +
> > +	return system_32bit_el0_cpumask();
> > +}
>
> I think this is correct, but damn what we really want to ask for is the
> intersection of task_cpu_possible_mask(p) and
> housekeeping_cpumask(HK_TYPE_TICK). It's a shame to duplicate the logic
> in task_cpu_possible_mask() here because we don't want to allocate a
> temporary mask.
Yeah I know :-/
>
> Maybe we could have a helper to consolidate things a little?
>
> static inline const struct cpumask *
> __task_cpu_possible_mask(struct task_struct *p, const struct cpumask *mask)
> {
> 	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
> 		return mask;
>
> 	if (!is_compat_thread(task_thread_info(p)))
> 		return mask;
>
> 	return system_32bit_el0_cpumask();
> }
>
> Then we could call that from both task_cpu_possible_mask() and
> task_cpu_fallback_mask(), but passing 'cpu_possible_mask' and
> housekeeping_cpumask(HK_TYPE_TICK) for the 'mask' argument respectively?
Good point! How does the following updated version look?
---
From: Frederic Weisbecker <frederic@kernel.org>
Date: Fri, 27 Sep 2024 00:48:59 +0200
Subject: [PATCH 2/2] sched,arm64: Handle CPU isolation on last resort fallback
rq selection
When a kthread or any other task has an affinity mask that is fully
offline or disallowed, the scheduler reaffines the task to all possible
CPUs as a last resort.

This default decision doesn't mix well with nohz_full CPUs, which are
part of the possible cpumask but don't want to be disturbed by unbound
kthreads or even detached pinned user tasks.

Make the fallback affinity setting aware of nohz_full.
Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 arch/arm64/include/asm/cpufeature.h  |  1 +
 arch/arm64/include/asm/mmu_context.h | 14 +++++++++++---
 arch/arm64/kernel/cpufeature.c       |  5 +++++
 include/linux/mmu_context.h          |  1 +
 kernel/sched/core.c                  |  2 +-
 5 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index 8b4e5a3cd24c..cac5efc836c0 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -671,6 +671,7 @@ static inline bool supports_clearbhb(int scope)
 }
 
 const struct cpumask *system_32bit_el0_cpumask(void);
+const struct cpumask *fallback_32bit_el0_cpumask(void);
 DECLARE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);
 
 static inline bool system_supports_32bit_el0(void)
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 48b3d9553b67..0dbe3b29049b 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -271,18 +271,26 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
 }
 
 static inline const struct cpumask *
-task_cpu_possible_mask(struct task_struct *p)
+__task_cpu_possible_mask(struct task_struct *p, const struct cpumask *mask)
 {
 	if (!static_branch_unlikely(&arm64_mismatched_32bit_el0))
-		return cpu_possible_mask;
+		return mask;
 
 	if (!is_compat_thread(task_thread_info(p)))
-		return cpu_possible_mask;
+		return mask;
 
 	return system_32bit_el0_cpumask();
 }
+
+static inline const struct cpumask *
+task_cpu_possible_mask(struct task_struct *p)
+{
+	return __task_cpu_possible_mask(p, cpu_possible_mask);
+}
 #define task_cpu_possible_mask task_cpu_possible_mask
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p);
+
 void verify_cpu_asid_bits(void);
 void post_ttbr_update_workaround(void);
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3c87659c14db..a983e8660987 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -1642,6 +1642,11 @@ const struct cpumask *system_32bit_el0_cpumask(void)
 	return cpu_possible_mask;
 }
 
+const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
+{
+	return __task_cpu_possible_mask(p, housekeeping_cpumask(HK_TYPE_TICK));
+}
+
 static int __init parse_32bit_el0_param(char *str)
 {
 	allow_mismatched_32bit_el0 = true;
diff --git a/include/linux/mmu_context.h b/include/linux/mmu_context.h
index bbaec80c78c5..ac01dc4eb2ce 100644
--- a/include/linux/mmu_context.h
+++ b/include/linux/mmu_context.h
@@ -24,6 +24,7 @@ static inline void leave_mm(void) { }
 #ifndef task_cpu_possible_mask
 # define task_cpu_possible_mask(p)	cpu_possible_mask
 # define task_cpu_possible(cpu, p)	true
+# define task_cpu_fallback_mask(p)	housekeeping_cpumask(HK_TYPE_TICK)
 #else
 # define task_cpu_possible(cpu, p)	cpumask_test_cpu((cpu), task_cpu_possible_mask(p))
 #endif
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95e40895a519..233b50b0e123 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3534,7 +3534,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 			 *
 			 * More yuck to audit.
 			 */
-			do_set_cpus_allowed(p, task_cpu_possible_mask(p));
+			do_set_cpus_allowed(p, task_cpu_fallback_mask(p));
 			state = fail;
 			break;
 		case fail:
--
2.46.0