* [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked()
[not found] <20251119095525.12019-1-piliu@redhat.com>
@ 2025-11-19 9:55 ` Pingfan Liu
2025-11-19 20:51 ` Waiman Long
2025-11-20 1:12 ` Chen Ridong
2025-11-20 17:00 ` [PATCHv7 0/2] sched/deadline: Walk up cpuset hierarchy to decide root domain when hot-unplug Tejun Heo
1 sibling, 2 replies; 4+ messages in thread
From: Pingfan Liu @ 2025-11-19 9:55 UTC (permalink / raw)
To: cgroups
Cc: Pingfan Liu, Waiman Long, Chen Ridong, Peter Zijlstra, Juri Lelli,
Pierre Gondois, Ingo Molnar, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Tejun Heo, Johannes Weiner, mkoutny, linux-kernel
cpuset_cpus_allowed() takes a reader lock that is sleepable under RT,
which means it cannot be called from within a raw_spinlock_t critical
section.

Introduce a new cpuset_cpus_allowed_locked() helper that performs the
same function as cpuset_cpus_allowed(), except that the caller must
already hold cpuset_mutex, so no further locking is needed.

Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Michal Koutný" <mkoutny@suse.com>
Cc: linux-kernel@vger.kernel.org
To: cgroups@vger.kernel.org
---
include/linux/cpuset.h | 9 +++++++-
kernel/cgroup/cpuset.c | 51 +++++++++++++++++++++++++++++-------------
2 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 2ddb256187b51..a98d3330385c2 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -74,6 +74,7 @@ extern void inc_dl_tasks_cs(struct task_struct *task);
extern void dec_dl_tasks_cs(struct task_struct *task);
extern void cpuset_lock(void);
extern void cpuset_unlock(void);
+extern void cpuset_cpus_allowed_locked(struct task_struct *p, struct cpumask *mask);
extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
extern bool cpuset_cpu_is_isolated(int cpu);
@@ -195,10 +196,16 @@ static inline void dec_dl_tasks_cs(struct task_struct *task) { }
static inline void cpuset_lock(void) { }
static inline void cpuset_unlock(void) { }
+static inline void cpuset_cpus_allowed_locked(struct task_struct *p,
+ struct cpumask *mask)
+{
+ cpumask_copy(mask, task_cpu_possible_mask(p));
+}
+
static inline void cpuset_cpus_allowed(struct task_struct *p,
struct cpumask *mask)
{
- cpumask_copy(mask, task_cpu_possible_mask(p));
+ cpuset_cpus_allowed_locked(p, mask);
}
static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 52468d2c178a3..7a179a1a2e30a 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -4116,24 +4116,13 @@ void __init cpuset_init_smp(void)
BUG_ON(!cpuset_migrate_mm_wq);
}
-/**
- * cpuset_cpus_allowed - return cpus_allowed mask from a tasks cpuset.
- * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
- * @pmask: pointer to struct cpumask variable to receive cpus_allowed set.
- *
- * Description: Returns the cpumask_var_t cpus_allowed of the cpuset
- * attached to the specified @tsk. Guaranteed to return some non-empty
- * subset of cpu_active_mask, even if this means going outside the
- * tasks cpuset, except when the task is in the top cpuset.
- **/
-
-void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
+/*
+ * Return cpus_allowed mask from a task's cpuset.
+ */
+static void __cpuset_cpus_allowed_locked(struct task_struct *tsk, struct cpumask *pmask)
{
- unsigned long flags;
struct cpuset *cs;
- spin_lock_irqsave(&callback_lock, flags);
-
cs = task_cs(tsk);
if (cs != &top_cpuset)
guarantee_active_cpus(tsk, pmask);
@@ -4153,7 +4142,39 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
if (!cpumask_intersects(pmask, cpu_active_mask))
cpumask_copy(pmask, possible_mask);
}
+}
+/**
+ * cpuset_cpus_allowed_locked - return cpus_allowed mask from a task's cpuset.
+ * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
+ * @pmask: pointer to struct cpumask variable to receive cpus_allowed set.
+ *
+ * Similar to cpuset_cpus_allowed() except that the caller must have acquired
+ * cpuset_mutex.
+ */
+void cpuset_cpus_allowed_locked(struct task_struct *tsk, struct cpumask *pmask)
+{
+ lockdep_assert_held(&cpuset_mutex);
+ __cpuset_cpus_allowed_locked(tsk, pmask);
+}
+
+/**
+ * cpuset_cpus_allowed - return cpus_allowed mask from a task's cpuset.
+ * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
+ * @pmask: pointer to struct cpumask variable to receive cpus_allowed set.
+ *
+ * Description: Returns the cpumask_var_t cpus_allowed of the cpuset
+ * attached to the specified @tsk. Guaranteed to return some non-empty
+ * subset of cpu_active_mask, even if this means going outside the
+ * task's cpuset, except when the task is in the top cpuset.
+ **/
+
+void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&callback_lock, flags);
+ __cpuset_cpus_allowed_locked(tsk, pmask);
spin_unlock_irqrestore(&callback_lock, flags);
}
--
2.49.0
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked()
2025-11-19 9:55 ` [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked() Pingfan Liu
@ 2025-11-19 20:51 ` Waiman Long
2025-11-20 1:12 ` Chen Ridong
1 sibling, 0 replies; 4+ messages in thread
From: Waiman Long @ 2025-11-19 20:51 UTC (permalink / raw)
To: Pingfan Liu, cgroups
Cc: Chen Ridong, Peter Zijlstra, Juri Lelli, Pierre Gondois,
Ingo Molnar, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Tejun Heo,
Johannes Weiner, mkoutny, linux-kernel
On 11/19/25 4:55 AM, Pingfan Liu wrote:
> [...]
Reviewed-by: Waiman Long <longman@redhat.com>
* Re: [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked()
2025-11-19 9:55 ` [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked() Pingfan Liu
2025-11-19 20:51 ` Waiman Long
@ 2025-11-20 1:12 ` Chen Ridong
1 sibling, 0 replies; 4+ messages in thread
From: Chen Ridong @ 2025-11-20 1:12 UTC (permalink / raw)
To: Pingfan Liu, cgroups
Cc: Waiman Long, Peter Zijlstra, Juri Lelli, Pierre Gondois,
Ingo Molnar, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
Ben Segall, Mel Gorman, Valentin Schneider, Tejun Heo,
Johannes Weiner, mkoutny, linux-kernel
On 2025/11/19 17:55, Pingfan Liu wrote:
> [...]
LGTM
Reviewed-by: Chen Ridong <chenridong@huawei.com>
--
Best regards,
Ridong
* Re: [PATCHv7 0/2] sched/deadline: Walk up cpuset hierarchy to decide root domain when hot-unplug
[not found] <20251119095525.12019-1-piliu@redhat.com>
2025-11-19 9:55 ` [PATCHv7 1/2] cgroup/cpuset: Introduce cpuset_cpus_allowed_locked() Pingfan Liu
@ 2025-11-20 17:00 ` Tejun Heo
1 sibling, 0 replies; 4+ messages in thread
From: Tejun Heo @ 2025-11-20 17:00 UTC (permalink / raw)
To: Pingfan Liu
Cc: Waiman Long, Chen Ridong, Peter Zijlstra, Juri Lelli,
Pierre Gondois, Ingo Molnar, Vincent Guittot, Dietmar Eggemann,
Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
Johannes Weiner, mkoutny, linux-kernel, cgroups
On Wed, Nov 19, 2025 at 05:55:23PM +0800, Pingfan Liu wrote:
> Pingfan Liu (2):
> cgroup/cpuset: Introduce cpuset_cpus_allowed_locked()
> sched/deadline: Walk up cpuset hierarchy to decide root domain when
> hot-unplug
Applied 1-2 to cgroup/for-6.19.
Thanks.
--
tejun