From: David Vernet <void@manifault.com>
To: linux-kernel@vger.kernel.org
Cc: peterz@infradead.org, mingo@redhat.com, juri.lelli@redhat.com,
vincent.guittot@linaro.org, dietmar.eggemann@arm.com,
rostedt@goodmis.org, bsegall@google.com, mgorman@suse.de,
bristot@redhat.com, vschneid@redhat.com, tj@kernel.org,
roman.gushchin@linux.dev, gautham.shenoy@amd.com,
kprateek.nayak@amd.com, aaron.lu@intel.com,
wuyun.abel@bytedance.com, kernel-team@meta.com
Subject: [PATCH v3 2/7] sched: Move is_cpu_allowed() into sched.h
Date: Wed, 9 Aug 2023 17:12:13 -0500
Message-ID: <20230809221218.163894-3-void@manifault.com>
In-Reply-To: <20230809221218.163894-1-void@manifault.com>

is_cpu_allowed() exists as a static inline function in core.c. The
functionality offered by is_cpu_allowed() is useful to scheduling
policies as well, e.g. to determine whether a runnable task can be
migrated to another core that would otherwise go idle.

Let's move it to sched.h.

Signed-off-by: David Vernet <void@manifault.com>
---
kernel/sched/core.c | 31 -------------------------------
kernel/sched/sched.h | 31 +++++++++++++++++++++++++++++++
2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 394e216b9d37..dd6412a49263 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -48,7 +48,6 @@
#include <linux/kcov.h>
#include <linux/kprobes.h>
#include <linux/llist_api.h>
-#include <linux/mmu_context.h>
#include <linux/mmzone.h>
#include <linux/mutex_api.h>
#include <linux/nmi.h>
@@ -2470,36 +2469,6 @@ static inline bool rq_has_pinned_tasks(struct rq *rq)
return rq->nr_pinned;
}
-/*
- * Per-CPU kthreads are allowed to run on !active && online CPUs, see
- * __set_cpus_allowed_ptr() and select_fallback_rq().
- */
-static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-{
- /* When not in the task's cpumask, no point in looking further. */
- if (!cpumask_test_cpu(cpu, p->cpus_ptr))
- return false;
-
- /* migrate_disabled() must be allowed to finish. */
- if (is_migration_disabled(p))
- return cpu_online(cpu);
-
- /* Non kernel threads are not allowed during either online or offline. */
- if (!(p->flags & PF_KTHREAD))
- return cpu_active(cpu) && task_cpu_possible(cpu, p);
-
- /* KTHREAD_IS_PER_CPU is always allowed. */
- if (kthread_is_per_cpu(p))
- return cpu_online(cpu);
-
- /* Regular kernel threads don't get to stay during offline. */
- if (cpu_dying(cpu))
- return false;
-
- /* But are allowed during online. */
- return cpu_online(cpu);
-}
-
/*
* This is how migration works:
*
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 69b100267fd0..88cca7cc00cf 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -44,6 +44,7 @@
#include <linux/lockdep.h>
#include <linux/minmax.h>
#include <linux/mm.h>
+#include <linux/mmu_context.h>
#include <linux/module.h>
#include <linux/mutex_api.h>
#include <linux/plist.h>
@@ -1203,6 +1204,36 @@ static inline bool is_migration_disabled(struct task_struct *p)
#endif
}
+/*
+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
+ * __set_cpus_allowed_ptr() and select_fallback_rq().
+ */
+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
+{
+ /* When not in the task's cpumask, no point in looking further. */
+ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
+ return false;
+
+ /* migrate_disabled() must be allowed to finish. */
+ if (is_migration_disabled(p))
+ return cpu_online(cpu);
+
+ /* Non kernel threads are not allowed during either online or offline. */
+ if (!(p->flags & PF_KTHREAD))
+ return cpu_active(cpu) && task_cpu_possible(cpu, p);
+
+ /* KTHREAD_IS_PER_CPU is always allowed. */
+ if (kthread_is_per_cpu(p))
+ return cpu_online(cpu);
+
+ /* Regular kernel threads don't get to stay during offline. */
+ if (cpu_dying(cpu))
+ return false;
+
+ /* But are allowed during online. */
+ return cpu_online(cpu);
+}
+
DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
--
2.41.0