* [PATCH -next v4 0/3] some optimization for cpuset
@ 2025-08-18 6:41 Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup Chen Ridong
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Chen Ridong @ 2025-08-18 6:41 UTC (permalink / raw)
To: longman, tj, hannes, mkoutny; +Cc: cgroups, linux-kernel, lujialin4, chenridong
From: Chen Ridong <chenridong@huawei.com>
This patch series contains several cpuset improvements:
1. Decouple cpuset and tmpmasks allocation/freeing.
2. Add cpuset_full_[un]lock helpers.
---
v4:
- update the commit message for patch 3.
v3:
- fix typos and comment errors.
- rename cpus_read_cpuset_[un]lock to cpuset_full_[un]lock
v2:
- dropped the guard helper approach, using new helpers instead.
- added patches for decoupling cpuset/tmpmasks allocation
Chen Ridong (3):
cpuset: decouple tmpmasks and cpumasks freeing in cgroup
cpuset: separate tmpmasks and cpuset allocation logic
cpuset: add helpers for cpus read and cpuset_mutex locks
kernel/cgroup/cpuset-internal.h | 2 +
kernel/cgroup/cpuset-v1.c | 12 +-
kernel/cgroup/cpuset.c | 220 +++++++++++++++++---------------
3 files changed, 122 insertions(+), 112 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 9+ messages in thread
* [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup
2025-08-18 6:41 [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
@ 2025-08-18 6:41 ` Chen Ridong
2025-08-24 17:07 ` Waiman Long
2025-08-18 6:41 ` [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
` (2 subsequent siblings)
3 siblings, 1 reply; 9+ messages in thread
From: Chen Ridong @ 2025-08-18 6:41 UTC (permalink / raw)
To: longman, tj, hannes, mkoutny; +Cc: cgroups, linux-kernel, lujialin4, chenridong
From: Chen Ridong <chenridong@huawei.com>
Currently, free_cpumasks() can free both tmpmasks and cpumasks of a cpuset
(cs). However, these two operations are not logically coupled. To improve
code clarity:
1. Move cpumask freeing to free_cpuset()
2. Rename free_cpumasks() to free_tmpmasks()
This change enforces the single responsibility principle.
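The split can be sketched in userspace C (a rough analogue with illustrative names and malloc in place of the kernel cpumask allocator, not the actual kernel API): each free helper now releases exactly the masks owned by its own structure, with no NULL-argument branching.

```c
#include <assert.h>
#include <stdlib.h>

/* Track live allocations so the pairing can be verified. */
static int live_allocs;

static void *xalloc(size_t n) { live_allocs++; return calloc(1, n); }
static void xfree(void *p)    { if (p) { live_allocs--; free(p); } }

/* Stand-in for struct tmpmasks: three temporary masks. */
struct tmpmasks { void *new_cpus, *addmask, *delmask; };

/* Post-patch shape: frees only the three temporary masks,
 * never touching any cpuset, so no conditional on a NULL cs. */
static void free_tmpmasks(struct tmpmasks *tmp)
{
	xfree(tmp->new_cpus);
	xfree(tmp->addmask);
	xfree(tmp->delmask);
}
```

The cpuset side gains the symmetric behavior: free_cpuset() frees its four masks inline and then the structure itself.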
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
kernel/cgroup/cpuset.c | 32 +++++++++++++-------------------
1 file changed, 13 insertions(+), 19 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3466ebbf1016..aebda14cc67f 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -459,23 +459,14 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
}
/**
- * free_cpumasks - free cpumasks in a tmpmasks structure
- * @cs: the cpuset that have cpumasks to be free.
+ * free_tmpmasks - free cpumasks in a tmpmasks structure
* @tmp: the tmpmasks structure pointer
*/
-static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline void free_tmpmasks(struct tmpmasks *tmp)
{
- if (cs) {
- free_cpumask_var(cs->cpus_allowed);
- free_cpumask_var(cs->effective_cpus);
- free_cpumask_var(cs->effective_xcpus);
- free_cpumask_var(cs->exclusive_cpus);
- }
- if (tmp) {
- free_cpumask_var(tmp->new_cpus);
- free_cpumask_var(tmp->addmask);
- free_cpumask_var(tmp->delmask);
- }
+ free_cpumask_var(tmp->new_cpus);
+ free_cpumask_var(tmp->addmask);
+ free_cpumask_var(tmp->delmask);
}
/**
@@ -508,7 +499,10 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
*/
static inline void free_cpuset(struct cpuset *cs)
{
- free_cpumasks(cs, NULL);
+ free_cpumask_var(cs->cpus_allowed);
+ free_cpumask_var(cs->effective_cpus);
+ free_cpumask_var(cs->effective_xcpus);
+ free_cpumask_var(cs->exclusive_cpus);
kfree(cs);
}
@@ -2427,7 +2421,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (cs->partition_root_state)
update_partition_sd_lb(cs, old_prs);
out_free:
- free_cpumasks(NULL, &tmp);
+ free_tmpmasks(&tmp);
return retval;
}
@@ -2530,7 +2524,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (cs->partition_root_state)
update_partition_sd_lb(cs, old_prs);
- free_cpumasks(NULL, &tmp);
+ free_tmpmasks(&tmp);
return 0;
}
@@ -2983,7 +2977,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
notify_partition_change(cs, old_prs);
if (force_sd_rebuild)
rebuild_sched_domains_locked();
- free_cpumasks(NULL, &tmpmask);
+ free_tmpmasks(&tmpmask);
return 0;
}
@@ -4006,7 +4000,7 @@ static void cpuset_handle_hotplug(void)
if (force_sd_rebuild)
rebuild_sched_domains_cpuslocked();
- free_cpumasks(NULL, ptmp);
+ free_tmpmasks(ptmp);
}
void cpuset_update_active_cpus(void)
--
2.34.1
* [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic
2025-08-18 6:41 [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup Chen Ridong
@ 2025-08-18 6:41 ` Chen Ridong
2025-08-24 17:05 ` Waiman Long
2025-08-18 6:41 ` [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
2025-08-23 2:08 ` [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
3 siblings, 1 reply; 9+ messages in thread
From: Chen Ridong @ 2025-08-18 6:41 UTC (permalink / raw)
To: longman, tj, hannes, mkoutny; +Cc: cgroups, linux-kernel, lujialin4, chenridong
From: Chen Ridong <chenridong@huawei.com>
The original alloc_cpumasks() served dual purposes: allocating cpumasks
for both temporary masks (tmpmasks) and cpuset structures. This patch:
1. Decouples these allocation paths for better code clarity
2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
functions
3. Maintains symmetric pairing:
- alloc_tmpmasks() ↔ free_tmpmasks()
- dup_or_alloc_cpuset() ↔ free_cpuset()
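The core of the new alloc_cpumasks() is an array-driven loop that unwinds on failure. A minimal userspace sketch of the same pattern (hypothetical names, calloc standing in for zalloc_cpumask_var()) shows why the caller never observes a half-initialized set:

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate @size masks through an array of destination pointers.
 * On any failure, free everything allocated so far and reset the
 * destinations, then report the error (-ENOMEM in the kernel). */
static int alloc_masks(void **pmasks[], unsigned int size, size_t bytes)
{
	unsigned int i;

	for (i = 0; i < size; i++) {
		*pmasks[i] = calloc(1, bytes);
		if (!*pmasks[i]) {
			while (i-- > 0) {	/* unwind i-1 .. 0 */
				free(*pmasks[i]);
				*pmasks[i] = NULL;
			}
			return -1;
		}
	}
	return 0;
}
```

Both alloc_tmpmasks() (3 masks) and dup_or_alloc_cpuset() (4 masks) then reduce to building the pointer array and calling this one loop.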
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
kernel/cgroup/cpuset.c | 128 ++++++++++++++++++++++-------------------
1 file changed, 69 insertions(+), 59 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index aebda14cc67f..d5588a1fef60 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -411,51 +411,46 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
}
/**
- * alloc_cpumasks - allocate three cpumasks for cpuset
- * @cs: the cpuset that have cpumasks to be allocated.
- * @tmp: the tmpmasks structure pointer
- * Return: 0 if successful, -ENOMEM otherwise.
+ * alloc_cpumasks - Allocate an array of cpumask variables
+ * @pmasks: Pointer to array of cpumask_var_t pointers
+ * @size: Number of cpumasks to allocate
*
- * Only one of the two input arguments should be non-NULL.
+ * Allocates @size cpumasks and initializes them to empty. Returns 0 on
+ * success, -ENOMEM on allocation failure. On failure, any previously
+ * allocated cpumasks are freed.
*/
-static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline int alloc_cpumasks(cpumask_var_t *pmasks[], u32 size)
{
- cpumask_var_t *pmask1, *pmask2, *pmask3, *pmask4;
+ int i;
- if (cs) {
- pmask1 = &cs->cpus_allowed;
- pmask2 = &cs->effective_cpus;
- pmask3 = &cs->effective_xcpus;
- pmask4 = &cs->exclusive_cpus;
- } else {
- pmask1 = &tmp->new_cpus;
- pmask2 = &tmp->addmask;
- pmask3 = &tmp->delmask;
- pmask4 = NULL;
+ for (i = 0; i < size; i++) {
+ if (!zalloc_cpumask_var(pmasks[i], GFP_KERNEL)) {
+ while (--i >= 0)
+ free_cpumask_var(*pmasks[i]);
+ return -ENOMEM;
+ }
}
-
- if (!zalloc_cpumask_var(pmask1, GFP_KERNEL))
- return -ENOMEM;
-
- if (!zalloc_cpumask_var(pmask2, GFP_KERNEL))
- goto free_one;
-
- if (!zalloc_cpumask_var(pmask3, GFP_KERNEL))
- goto free_two;
-
- if (pmask4 && !zalloc_cpumask_var(pmask4, GFP_KERNEL))
- goto free_three;
-
-
return 0;
+}
-free_three:
- free_cpumask_var(*pmask3);
-free_two:
- free_cpumask_var(*pmask2);
-free_one:
- free_cpumask_var(*pmask1);
- return -ENOMEM;
+/**
+ * alloc_tmpmasks - Allocate temporary cpumasks for cpuset operations.
+ * @tmp: Pointer to tmpmasks structure to populate
+ * Return: 0 on success, -ENOMEM on allocation failure
+ */
+static inline int alloc_tmpmasks(struct tmpmasks *tmp)
+{
+ /*
+ * Array of pointers to the three cpumask_var_t fields in tmpmasks.
+ * Note: Array size must match actual number of masks (3)
+ */
+ cpumask_var_t *pmask[3] = {
+ &tmp->new_cpus,
+ &tmp->addmask,
+ &tmp->delmask
+ };
+
+ return alloc_cpumasks(pmask, ARRAY_SIZE(pmask));
}
/**
@@ -470,26 +465,46 @@ static inline void free_tmpmasks(struct tmpmasks *tmp)
}
/**
- * alloc_trial_cpuset - allocate a trial cpuset
- * @cs: the cpuset that the trial cpuset duplicates
+ * dup_or_alloc_cpuset - Duplicate or allocate a new cpuset
+ * @cs: Source cpuset to duplicate (NULL for a fresh allocation)
+ *
+ * Creates a new cpuset by either:
+ * 1. Duplicating an existing cpuset (if @cs is non-NULL), or
+ * 2. Allocating a fresh cpuset with zero-initialized masks (if @cs is NULL)
+ *
+ * Return: Pointer to newly allocated cpuset on success, NULL on failure
*/
-static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
+static struct cpuset *dup_or_alloc_cpuset(struct cpuset *cs)
{
struct cpuset *trial;
- trial = kmemdup(cs, sizeof(*cs), GFP_KERNEL);
+ /* Allocate base structure */
+ trial = cs ? kmemdup(cs, sizeof(*cs), GFP_KERNEL) :
+ kzalloc(sizeof(*cs), GFP_KERNEL);
if (!trial)
return NULL;
- if (alloc_cpumasks(trial, NULL)) {
+ /* Setup cpumask pointer array */
+ cpumask_var_t *pmask[4] = {
+ &trial->cpus_allowed,
+ &trial->effective_cpus,
+ &trial->effective_xcpus,
+ &trial->exclusive_cpus
+ };
+
+ if (alloc_cpumasks(pmask, ARRAY_SIZE(pmask))) {
kfree(trial);
return NULL;
}
- cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
- cpumask_copy(trial->effective_cpus, cs->effective_cpus);
- cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
- cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+ /* Copy masks if duplicating */
+ if (cs) {
+ cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+ cpumask_copy(trial->effective_cpus, cs->effective_cpus);
+ cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
+ cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+ }
+
return trial;
}
@@ -2332,7 +2347,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
return 0;
- if (alloc_cpumasks(NULL, &tmp))
+ if (alloc_tmpmasks(&tmp))
return -ENOMEM;
if (old_prs) {
@@ -2476,7 +2491,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
if (retval)
return retval;
- if (alloc_cpumasks(NULL, &tmp))
+ if (alloc_tmpmasks(&tmp))
return -ENOMEM;
if (old_prs) {
@@ -2820,7 +2835,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
int spread_flag_changed;
int err;
- trialcs = alloc_trial_cpuset(cs);
+ trialcs = dup_or_alloc_cpuset(cs);
if (!trialcs)
return -ENOMEM;
@@ -2881,7 +2896,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
if (new_prs && is_prs_invalid(old_prs))
old_prs = PRS_MEMBER;
- if (alloc_cpumasks(NULL, &tmpmask))
+ if (alloc_tmpmasks(&tmpmask))
return -ENOMEM;
err = update_partition_exclusive_flag(cs, new_prs);
@@ -3223,7 +3238,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
if (!is_cpuset_online(cs))
goto out_unlock;
- trialcs = alloc_trial_cpuset(cs);
+ trialcs = dup_or_alloc_cpuset(cs);
if (!trialcs) {
retval = -ENOMEM;
goto out_unlock;
@@ -3456,15 +3471,10 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
if (!parent_css)
return &top_cpuset.css;
- cs = kzalloc(sizeof(*cs), GFP_KERNEL);
+ cs = dup_or_alloc_cpuset(NULL);
if (!cs)
return ERR_PTR(-ENOMEM);
- if (alloc_cpumasks(cs, NULL)) {
- kfree(cs);
- return ERR_PTR(-ENOMEM);
- }
-
__set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
fmeter_init(&cs->fmeter);
cs->relax_domain_level = -1;
@@ -3920,7 +3930,7 @@ static void cpuset_handle_hotplug(void)
bool on_dfl = is_in_v2_mode();
struct tmpmasks tmp, *ptmp = NULL;
- if (on_dfl && !alloc_cpumasks(NULL, &tmp))
+ if (on_dfl && !alloc_tmpmasks(&tmp))
ptmp = &tmp;
lockdep_assert_cpus_held();
--
2.34.1
* [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks
2025-08-18 6:41 [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
@ 2025-08-18 6:41 ` Chen Ridong
2025-08-24 17:07 ` Waiman Long
2025-08-23 2:08 ` [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
3 siblings, 1 reply; 9+ messages in thread
From: Chen Ridong @ 2025-08-18 6:41 UTC (permalink / raw)
To: longman, tj, hannes, mkoutny; +Cc: cgroups, linux-kernel, lujialin4, chenridong
From: Chen Ridong <chenridong@huawei.com>
cpuset: add helpers for cpus_read_lock and cpuset_mutex locks.
Replace repetitive locking patterns with new helpers:
- cpuset_full_lock()
- cpuset_full_unlock()
This makes the code cleaner and ensures consistent lock ordering.
Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
kernel/cgroup/cpuset-internal.h | 2 ++
kernel/cgroup/cpuset-v1.c | 12 +++----
kernel/cgroup/cpuset.c | 60 +++++++++++++++++++--------------
3 files changed, 40 insertions(+), 34 deletions(-)
diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
index 75b3aef39231..337608f408ce 100644
--- a/kernel/cgroup/cpuset-internal.h
+++ b/kernel/cgroup/cpuset-internal.h
@@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off);
int cpuset_common_seq_show(struct seq_file *sf, void *v);
+void cpuset_full_lock(void);
+void cpuset_full_unlock(void);
/*
* cpuset-v1.c
diff --git a/kernel/cgroup/cpuset-v1.c b/kernel/cgroup/cpuset-v1.c
index b69a7db67090..12e76774c75b 100644
--- a/kernel/cgroup/cpuset-v1.c
+++ b/kernel/cgroup/cpuset-v1.c
@@ -169,8 +169,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
cpuset_filetype_t type = cft->private;
int retval = -ENODEV;
- cpus_read_lock();
- cpuset_lock();
+ cpuset_full_lock();
if (!is_cpuset_online(cs))
goto out_unlock;
@@ -184,8 +183,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
break;
}
out_unlock:
- cpuset_unlock();
- cpus_read_unlock();
+ cpuset_full_unlock();
return retval;
}
@@ -454,8 +452,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
cpuset_filetype_t type = cft->private;
int retval = 0;
- cpus_read_lock();
- cpuset_lock();
+ cpuset_full_lock();
if (!is_cpuset_online(cs)) {
retval = -ENODEV;
goto out_unlock;
@@ -498,8 +495,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
break;
}
out_unlock:
- cpuset_unlock();
- cpus_read_unlock();
+ cpuset_full_unlock();
return retval;
}
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index d5588a1fef60..d29f90a28e1e 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -250,6 +250,12 @@ static struct cpuset top_cpuset = {
static DEFINE_MUTEX(cpuset_mutex);
+/**
+ * cpuset_lock - Acquire the global cpuset mutex
+ *
+ * This locks the global cpuset mutex to prevent modifications to cpuset
+ * hierarchy and configurations. Holding this mutex alone is not sufficient to modify cpusets.
+ */
void cpuset_lock(void)
{
mutex_lock(&cpuset_mutex);
@@ -260,6 +266,24 @@ void cpuset_unlock(void)
mutex_unlock(&cpuset_mutex);
}
+/**
+ * cpuset_full_lock - Acquire full protection for cpuset modification
+ *
+ * Takes both CPU hotplug read lock (cpus_read_lock()) and cpuset mutex
+ * to safely modify cpuset data.
+ */
+void cpuset_full_lock(void)
+{
+ cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+}
+
+void cpuset_full_unlock(void)
+{
+ mutex_unlock(&cpuset_mutex);
+ cpus_read_unlock();
+}
+
static DEFINE_SPINLOCK(callback_lock);
void cpuset_callback_lock_irq(void)
@@ -3233,8 +3257,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
int retval = -ENODEV;
buf = strstrip(buf);
- cpus_read_lock();
- mutex_lock(&cpuset_mutex);
+ cpuset_full_lock();
if (!is_cpuset_online(cs))
goto out_unlock;
@@ -3263,8 +3286,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
if (force_sd_rebuild)
rebuild_sched_domains_locked();
out_unlock:
- mutex_unlock(&cpuset_mutex);
- cpus_read_unlock();
+ cpuset_full_unlock();
flush_workqueue(cpuset_migrate_mm_wq);
return retval ?: nbytes;
}
@@ -3367,12 +3389,10 @@ static ssize_t cpuset_partition_write(struct kernfs_open_file *of, char *buf,
else
return -EINVAL;
- cpus_read_lock();
- mutex_lock(&cpuset_mutex);
+ cpuset_full_lock();
if (is_cpuset_online(cs))
retval = update_prstate(cs, val);
- mutex_unlock(&cpuset_mutex);
- cpus_read_unlock();
+ cpuset_full_unlock();
return retval ?: nbytes;
}
@@ -3497,9 +3517,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
if (!parent)
return 0;
- cpus_read_lock();
- mutex_lock(&cpuset_mutex);
-
+ cpuset_full_lock();
if (is_spread_page(parent))
set_bit(CS_SPREAD_PAGE, &cs->flags);
if (is_spread_slab(parent))
@@ -3551,8 +3569,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
spin_unlock_irq(&callback_lock);
out_unlock:
- mutex_unlock(&cpuset_mutex);
- cpus_read_unlock();
+ cpuset_full_unlock();
return 0;
}
@@ -3567,16 +3584,12 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
{
struct cpuset *cs = css_cs(css);
- cpus_read_lock();
- mutex_lock(&cpuset_mutex);
-
+ cpuset_full_lock();
if (!cpuset_v2() && is_sched_load_balance(cs))
cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
cpuset_dec();
-
- mutex_unlock(&cpuset_mutex);
- cpus_read_unlock();
+ cpuset_full_unlock();
}
/*
@@ -3588,16 +3601,11 @@ static void cpuset_css_killed(struct cgroup_subsys_state *css)
{
struct cpuset *cs = css_cs(css);
- cpus_read_lock();
- mutex_lock(&cpuset_mutex);
-
+ cpuset_full_lock();
/* Reset valid partition back to member */
if (is_partition_valid(cs))
update_prstate(cs, PRS_MEMBER);
-
- mutex_unlock(&cpuset_mutex);
- cpus_read_unlock();
-
+ cpuset_full_unlock();
}
static void cpuset_css_free(struct cgroup_subsys_state *css)
--
2.34.1
* Re: [PATCH -next v4 0/3] some optimization for cpuset
2025-08-18 6:41 [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
` (2 preceding siblings ...)
2025-08-18 6:41 ` [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
@ 2025-08-23 2:08 ` Chen Ridong
3 siblings, 0 replies; 9+ messages in thread
From: Chen Ridong @ 2025-08-23 2:08 UTC (permalink / raw)
To: longman, tj, hannes, mkoutny; +Cc: cgroups, linux-kernel, lujialin4, chenridong
On 2025/8/18 14:41, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> This patch series contains several cpuset improvements:
>
> 1. Decouple cpuset and tmpmasks allocation/freeing.
> 2. Add cpuset_full_[un]lock helpers.
>
> ---
> v4:
> - update the commit message for patch 3.
>
> v3:
> - fix typos and comment errors.
> - rename cpus_read_cpuset_[un]lock to cpuset_full_[un]lock
>
> v2:
> - dropped the guard helper approach, using new helpers instead.
> - added patches for decoupling cpuset/tmpmasks allocation
>
> Chen Ridong (3):
> cpuset: decouple tmpmasks and cpumasks freeing in cgroup
> cpuset: separate tmpmasks and cpuset allocation logic
> cpuset: add helpers for cpus read and cpuset_mutex locks
>
> kernel/cgroup/cpuset-internal.h | 2 +
> kernel/cgroup/cpuset-v1.c | 12 +-
> kernel/cgroup/cpuset.c | 220 +++++++++++++++++---------------
> 3 files changed, 122 insertions(+), 112 deletions(-)
>
Hi Longman,
Looking forward to your review of this series again. Would greatly appreciate it.
--
Best regards,
Ridong
* Re: [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic
2025-08-18 6:41 ` [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
@ 2025-08-24 17:05 ` Waiman Long
2025-08-25 0:52 ` Chen Ridong
0 siblings, 1 reply; 9+ messages in thread
From: Waiman Long @ 2025-08-24 17:05 UTC (permalink / raw)
To: Chen Ridong, tj, hannes, mkoutny
Cc: cgroups, linux-kernel, lujialin4, chenridong
On 8/18/25 2:41 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> The original alloc_cpumasks() served dual purposes: allocating cpumasks
> for both temporary masks (tmpmasks) and cpuset structures. This patch:
>
> 1. Decouples these allocation paths for better code clarity
> 2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
> functions
> 3. Maintains symmetric pairing:
> - alloc_tmpmasks() ↔ free_tmpmasks()
> - dup_or_alloc_cpuset() ↔ free_cpuset()
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
> kernel/cgroup/cpuset.c | 128 ++++++++++++++++++++++-------------------
> 1 file changed, 69 insertions(+), 59 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index aebda14cc67f..d5588a1fef60 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -411,51 +411,46 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
> }
>
> /**
> - * alloc_cpumasks - allocate three cpumasks for cpuset
> - * @cs: the cpuset that have cpumasks to be allocated.
> - * @tmp: the tmpmasks structure pointer
> - * Return: 0 if successful, -ENOMEM otherwise.
> + * alloc_cpumasks - Allocate an array of cpumask variables
> + * @pmasks: Pointer to array of cpumask_var_t pointers
> + * @size: Number of cpumasks to allocate
> *
> - * Only one of the two input arguments should be non-NULL.
> + * Allocates @size cpumasks and initializes them to empty. Returns 0 on
> + * success, -ENOMEM on allocation failure. On failure, any previously
> + * allocated cpumasks are freed.
The convention for the kernel-doc is to have a "Return:" tag if the
function has a returned value. That "Return:" tag is deleted by this
change. Your description does describe the returned value and no test
robot failure was reported. Other than that, the rest of the patch looks
good to me.
Cheers,
Longman
> */
> -static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
> +static inline int alloc_cpumasks(cpumask_var_t *pmasks[], u32 size)
> {
> - cpumask_var_t *pmask1, *pmask2, *pmask3, *pmask4;
> + int i;
>
> - if (cs) {
> - pmask1 = &cs->cpus_allowed;
> - pmask2 = &cs->effective_cpus;
> - pmask3 = &cs->effective_xcpus;
> - pmask4 = &cs->exclusive_cpus;
> - } else {
> - pmask1 = &tmp->new_cpus;
> - pmask2 = &tmp->addmask;
> - pmask3 = &tmp->delmask;
> - pmask4 = NULL;
> + for (i = 0; i < size; i++) {
> + if (!zalloc_cpumask_var(pmasks[i], GFP_KERNEL)) {
> + while (--i >= 0)
> + free_cpumask_var(*pmasks[i]);
> + return -ENOMEM;
> + }
> }
> -
> - if (!zalloc_cpumask_var(pmask1, GFP_KERNEL))
> - return -ENOMEM;
> -
> - if (!zalloc_cpumask_var(pmask2, GFP_KERNEL))
> - goto free_one;
> -
> - if (!zalloc_cpumask_var(pmask3, GFP_KERNEL))
> - goto free_two;
> -
> - if (pmask4 && !zalloc_cpumask_var(pmask4, GFP_KERNEL))
> - goto free_three;
> -
> -
> return 0;
> +}
>
> -free_three:
> - free_cpumask_var(*pmask3);
> -free_two:
> - free_cpumask_var(*pmask2);
> -free_one:
> - free_cpumask_var(*pmask1);
> - return -ENOMEM;
> +/**
> + * alloc_tmpmasks - Allocate temporary cpumasks for cpuset operations.
> + * @tmp: Pointer to tmpmasks structure to populate
> + * Return: 0 on success, -ENOMEM on allocation failure
> + */
> +static inline int alloc_tmpmasks(struct tmpmasks *tmp)
> +{
> + /*
> + * Array of pointers to the three cpumask_var_t fields in tmpmasks.
> + * Note: Array size must match actual number of masks (3)
> + */
> + cpumask_var_t *pmask[3] = {
> + &tmp->new_cpus,
> + &tmp->addmask,
> + &tmp->delmask
> + };
> +
> + return alloc_cpumasks(pmask, ARRAY_SIZE(pmask));
> }
>
> /**
> @@ -470,26 +465,46 @@ static inline void free_tmpmasks(struct tmpmasks *tmp)
> }
>
> /**
> - * alloc_trial_cpuset - allocate a trial cpuset
> - * @cs: the cpuset that the trial cpuset duplicates
> + * dup_or_alloc_cpuset - Duplicate or allocate a new cpuset
> + * @cs: Source cpuset to duplicate (NULL for a fresh allocation)
> + *
> + * Creates a new cpuset by either:
> + * 1. Duplicating an existing cpuset (if @cs is non-NULL), or
> + * 2. Allocating a fresh cpuset with zero-initialized masks (if @cs is NULL)
> + *
> + * Return: Pointer to newly allocated cpuset on success, NULL on failure
> */
> -static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
> +static struct cpuset *dup_or_alloc_cpuset(struct cpuset *cs)
> {
> struct cpuset *trial;
>
> - trial = kmemdup(cs, sizeof(*cs), GFP_KERNEL);
> + /* Allocate base structure */
> + trial = cs ? kmemdup(cs, sizeof(*cs), GFP_KERNEL) :
> + kzalloc(sizeof(*cs), GFP_KERNEL);
> if (!trial)
> return NULL;
>
> - if (alloc_cpumasks(trial, NULL)) {
> + /* Setup cpumask pointer array */
> + cpumask_var_t *pmask[4] = {
> + &trial->cpus_allowed,
> + &trial->effective_cpus,
> + &trial->effective_xcpus,
> + &trial->exclusive_cpus
> + };
> +
> + if (alloc_cpumasks(pmask, ARRAY_SIZE(pmask))) {
> kfree(trial);
> return NULL;
> }
>
> - cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
> - cpumask_copy(trial->effective_cpus, cs->effective_cpus);
> - cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
> - cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
> + /* Copy masks if duplicating */
> + if (cs) {
> + cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
> + cpumask_copy(trial->effective_cpus, cs->effective_cpus);
> + cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
> + cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
> + }
> +
> return trial;
> }
>
> @@ -2332,7 +2347,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
> return 0;
>
> - if (alloc_cpumasks(NULL, &tmp))
> + if (alloc_tmpmasks(&tmp))
> return -ENOMEM;
>
> if (old_prs) {
> @@ -2476,7 +2491,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> if (retval)
> return retval;
>
> - if (alloc_cpumasks(NULL, &tmp))
> + if (alloc_tmpmasks(&tmp))
> return -ENOMEM;
>
> if (old_prs) {
> @@ -2820,7 +2835,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
> int spread_flag_changed;
> int err;
>
> - trialcs = alloc_trial_cpuset(cs);
> + trialcs = dup_or_alloc_cpuset(cs);
> if (!trialcs)
> return -ENOMEM;
>
> @@ -2881,7 +2896,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> if (new_prs && is_prs_invalid(old_prs))
> old_prs = PRS_MEMBER;
>
> - if (alloc_cpumasks(NULL, &tmpmask))
> + if (alloc_tmpmasks(&tmpmask))
> return -ENOMEM;
>
> err = update_partition_exclusive_flag(cs, new_prs);
> @@ -3223,7 +3238,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> if (!is_cpuset_online(cs))
> goto out_unlock;
>
> - trialcs = alloc_trial_cpuset(cs);
> + trialcs = dup_or_alloc_cpuset(cs);
> if (!trialcs) {
> retval = -ENOMEM;
> goto out_unlock;
> @@ -3456,15 +3471,10 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
> if (!parent_css)
> return &top_cpuset.css;
>
> - cs = kzalloc(sizeof(*cs), GFP_KERNEL);
> + cs = dup_or_alloc_cpuset(NULL);
> if (!cs)
> return ERR_PTR(-ENOMEM);
>
> - if (alloc_cpumasks(cs, NULL)) {
> - kfree(cs);
> - return ERR_PTR(-ENOMEM);
> - }
> -
> __set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
> fmeter_init(&cs->fmeter);
> cs->relax_domain_level = -1;
> @@ -3920,7 +3930,7 @@ static void cpuset_handle_hotplug(void)
> bool on_dfl = is_in_v2_mode();
> struct tmpmasks tmp, *ptmp = NULL;
>
> - if (on_dfl && !alloc_cpumasks(NULL, &tmp))
> + if (on_dfl && !alloc_tmpmasks(&tmp))
> ptmp = &tmp;
>
> lockdep_assert_cpus_held();
* Re: [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup
2025-08-18 6:41 ` [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup Chen Ridong
@ 2025-08-24 17:07 ` Waiman Long
0 siblings, 0 replies; 9+ messages in thread
From: Waiman Long @ 2025-08-24 17:07 UTC (permalink / raw)
To: Chen Ridong, tj, hannes, mkoutny
Cc: cgroups, linux-kernel, lujialin4, chenridong
On 8/18/25 2:41 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> Currently, free_cpumasks() can free both tmpmasks and cpumasks of a cpuset
> (cs). However, these two operations are not logically coupled. To improve
> code clarity:
> 1. Move cpumask freeing to free_cpuset()
> 2. Rename free_cpumasks() to free_tmpmasks()
>
> This change enforces the single responsibility principle.
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
> kernel/cgroup/cpuset.c | 32 +++++++++++++-------------------
> 1 file changed, 13 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 3466ebbf1016..aebda14cc67f 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -459,23 +459,14 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
> }
>
> /**
> - * free_cpumasks - free cpumasks in a tmpmasks structure
> - * @cs: the cpuset that have cpumasks to be free.
> + * free_tmpmasks - free cpumasks in a tmpmasks structure
> * @tmp: the tmpmasks structure pointer
> */
> -static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
> +static inline void free_tmpmasks(struct tmpmasks *tmp)
> {
> - if (cs) {
> - free_cpumask_var(cs->cpus_allowed);
> - free_cpumask_var(cs->effective_cpus);
> - free_cpumask_var(cs->effective_xcpus);
> - free_cpumask_var(cs->exclusive_cpus);
> - }
> - if (tmp) {
> - free_cpumask_var(tmp->new_cpus);
> - free_cpumask_var(tmp->addmask);
> - free_cpumask_var(tmp->delmask);
> - }
> + free_cpumask_var(tmp->new_cpus);
> + free_cpumask_var(tmp->addmask);
> + free_cpumask_var(tmp->delmask);
> }
>
> /**
> @@ -508,7 +499,10 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
> */
> static inline void free_cpuset(struct cpuset *cs)
> {
> - free_cpumasks(cs, NULL);
> + free_cpumask_var(cs->cpus_allowed);
> + free_cpumask_var(cs->effective_cpus);
> + free_cpumask_var(cs->effective_xcpus);
> + free_cpumask_var(cs->exclusive_cpus);
> kfree(cs);
> }
>
> @@ -2427,7 +2421,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> if (cs->partition_root_state)
> update_partition_sd_lb(cs, old_prs);
> out_free:
> - free_cpumasks(NULL, &tmp);
> + free_tmpmasks(&tmp);
> return retval;
> }
>
> @@ -2530,7 +2524,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> if (cs->partition_root_state)
> update_partition_sd_lb(cs, old_prs);
>
> - free_cpumasks(NULL, &tmp);
> + free_tmpmasks(&tmp);
> return 0;
> }
>
> @@ -2983,7 +2977,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
> notify_partition_change(cs, old_prs);
> if (force_sd_rebuild)
> rebuild_sched_domains_locked();
> - free_cpumasks(NULL, &tmpmask);
> + free_tmpmasks(&tmpmask);
> return 0;
> }
>
> @@ -4006,7 +4000,7 @@ static void cpuset_handle_hotplug(void)
> if (force_sd_rebuild)
> rebuild_sched_domains_cpuslocked();
>
> - free_cpumasks(NULL, ptmp);
> + free_tmpmasks(ptmp);
> }
>
> void cpuset_update_active_cpus(void)
Reviewed-by: Waiman Long <longman@redhat.com>
* Re: [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks
2025-08-18 6:41 ` [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
@ 2025-08-24 17:07 ` Waiman Long
0 siblings, 0 replies; 9+ messages in thread
From: Waiman Long @ 2025-08-24 17:07 UTC (permalink / raw)
To: Chen Ridong, tj, hannes, mkoutny
Cc: cgroups, linux-kernel, lujialin4, chenridong
On 8/18/25 2:41 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> cpuset: add helpers for cpus_read_lock and cpuset_mutex locks.
>
> Replace repetitive locking patterns with new helpers:
> - cpuset_full_lock()
> - cpuset_full_unlock()
>
> This makes the code cleaner and ensures consistent lock ordering.
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
> kernel/cgroup/cpuset-internal.h | 2 ++
> kernel/cgroup/cpuset-v1.c | 12 +++----
> kernel/cgroup/cpuset.c | 60 +++++++++++++++++++--------------
> 3 files changed, 40 insertions(+), 34 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
> index 75b3aef39231..337608f408ce 100644
> --- a/kernel/cgroup/cpuset-internal.h
> +++ b/kernel/cgroup/cpuset-internal.h
> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
> ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> char *buf, size_t nbytes, loff_t off);
> int cpuset_common_seq_show(struct seq_file *sf, void *v);
> +void cpuset_full_lock(void);
> +void cpuset_full_unlock(void);
>
> /*
> * cpuset-v1.c
> diff --git a/kernel/cgroup/cpuset-v1.c b/kernel/cgroup/cpuset-v1.c
> index b69a7db67090..12e76774c75b 100644
> --- a/kernel/cgroup/cpuset-v1.c
> +++ b/kernel/cgroup/cpuset-v1.c
> @@ -169,8 +169,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
> cpuset_filetype_t type = cft->private;
> int retval = -ENODEV;
>
> - cpus_read_lock();
> - cpuset_lock();
> + cpuset_full_lock();
> if (!is_cpuset_online(cs))
> goto out_unlock;
>
> @@ -184,8 +183,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
> break;
> }
> out_unlock:
> - cpuset_unlock();
> - cpus_read_unlock();
> + cpuset_full_unlock();
> return retval;
> }
>
> @@ -454,8 +452,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> cpuset_filetype_t type = cft->private;
> int retval = 0;
>
> - cpus_read_lock();
> - cpuset_lock();
> + cpuset_full_lock();
> if (!is_cpuset_online(cs)) {
> retval = -ENODEV;
> goto out_unlock;
> @@ -498,8 +495,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> break;
> }
> out_unlock:
> - cpuset_unlock();
> - cpus_read_unlock();
> + cpuset_full_unlock();
> return retval;
> }
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index d5588a1fef60..d29f90a28e1e 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -250,6 +250,12 @@ static struct cpuset top_cpuset = {
>
> static DEFINE_MUTEX(cpuset_mutex);
>
> +/**
> + * cpuset_lock - Acquire the global cpuset mutex
> + *
> + * This locks the global cpuset mutex to prevent modifications to the cpuset
> + * hierarchy and configuration; on its own it is not sufficient to modify cpusets.
> + */
> void cpuset_lock(void)
> {
> mutex_lock(&cpuset_mutex);
> @@ -260,6 +266,24 @@ void cpuset_unlock(void)
> mutex_unlock(&cpuset_mutex);
> }
>
> +/**
> + * cpuset_full_lock - Acquire full protection for cpuset modification
> + *
> + * Takes both the CPU hotplug read lock (cpus_read_lock()) and the cpuset
> + * mutex so that cpuset data can be modified safely.
> + */
> +void cpuset_full_lock(void)
> +{
> + cpus_read_lock();
> + mutex_lock(&cpuset_mutex);
> +}
> +
> +void cpuset_full_unlock(void)
> +{
> + mutex_unlock(&cpuset_mutex);
> + cpus_read_unlock();
> +}
> +
> static DEFINE_SPINLOCK(callback_lock);
>
> void cpuset_callback_lock_irq(void)
> @@ -3233,8 +3257,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> int retval = -ENODEV;
>
> buf = strstrip(buf);
> - cpus_read_lock();
> - mutex_lock(&cpuset_mutex);
> + cpuset_full_lock();
> if (!is_cpuset_online(cs))
> goto out_unlock;
>
> @@ -3263,8 +3286,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> if (force_sd_rebuild)
> rebuild_sched_domains_locked();
> out_unlock:
> - mutex_unlock(&cpuset_mutex);
> - cpus_read_unlock();
> + cpuset_full_unlock();
> flush_workqueue(cpuset_migrate_mm_wq);
> return retval ?: nbytes;
> }
> @@ -3367,12 +3389,10 @@ static ssize_t cpuset_partition_write(struct kernfs_open_file *of, char *buf,
> else
> return -EINVAL;
>
> - cpus_read_lock();
> - mutex_lock(&cpuset_mutex);
> + cpuset_full_lock();
> if (is_cpuset_online(cs))
> retval = update_prstate(cs, val);
> - mutex_unlock(&cpuset_mutex);
> - cpus_read_unlock();
> + cpuset_full_unlock();
> return retval ?: nbytes;
> }
>
> @@ -3497,9 +3517,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
> if (!parent)
> return 0;
>
> - cpus_read_lock();
> - mutex_lock(&cpuset_mutex);
> -
> + cpuset_full_lock();
> if (is_spread_page(parent))
> set_bit(CS_SPREAD_PAGE, &cs->flags);
> if (is_spread_slab(parent))
> @@ -3551,8 +3569,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
> cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
> spin_unlock_irq(&callback_lock);
> out_unlock:
> - mutex_unlock(&cpuset_mutex);
> - cpus_read_unlock();
> + cpuset_full_unlock();
> return 0;
> }
>
> @@ -3567,16 +3584,12 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
> {
> struct cpuset *cs = css_cs(css);
>
> - cpus_read_lock();
> - mutex_lock(&cpuset_mutex);
> -
> + cpuset_full_lock();
> if (!cpuset_v2() && is_sched_load_balance(cs))
> cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
>
> cpuset_dec();
> -
> - mutex_unlock(&cpuset_mutex);
> - cpus_read_unlock();
> + cpuset_full_unlock();
> }
>
> /*
> @@ -3588,16 +3601,11 @@ static void cpuset_css_killed(struct cgroup_subsys_state *css)
> {
> struct cpuset *cs = css_cs(css);
>
> - cpus_read_lock();
> - mutex_lock(&cpuset_mutex);
> -
> + cpuset_full_lock();
> /* Reset valid partition back to member */
> if (is_partition_valid(cs))
> update_prstate(cs, PRS_MEMBER);
> -
> - mutex_unlock(&cpuset_mutex);
> - cpus_read_unlock();
> -
> + cpuset_full_unlock();
> }
>
> static void cpuset_css_free(struct cgroup_subsys_state *css)
Reviewed-by: Waiman Long <longman@redhat.com>
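The pairing that cpuset_full_lock()/cpuset_full_unlock() enforce can be illustrated outside the kernel with ordinary pthread primitives; the two mutexes here are hypothetical stand-ins (in the kernel, cpus_read_lock() is a percpu read lock, not a mutex), but the ordering discipline is the same.

```c
#include <assert.h>
#include <pthread.h>

/* Stand-ins for cpu_hotplug_lock and cpuset_mutex. */
static pthread_mutex_t cpu_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cpuset_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both locks in one fixed order everywhere... */
static void cpuset_full_lock(void)
{
	pthread_mutex_lock(&cpu_hotplug_lock);
	pthread_mutex_lock(&cpuset_mutex);
}

/* ...and release them in the reverse order. */
static void cpuset_full_unlock(void)
{
	pthread_mutex_unlock(&cpuset_mutex);
	pthread_mutex_unlock(&cpu_hotplug_lock);
}
```

Centralizing the pair in one helper is what makes the "consistent lock ordering" claim in the commit message checkable: an open-coded caller taking the two locks in the opposite order would risk an ABBA deadlock, and after this patch no such caller remains.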
* Re: [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic
2025-08-24 17:05 ` Waiman Long
@ 2025-08-25 0:52 ` Chen Ridong
0 siblings, 0 replies; 9+ messages in thread
From: Chen Ridong @ 2025-08-25 0:52 UTC (permalink / raw)
To: Waiman Long, tj, hannes, mkoutny
Cc: cgroups, linux-kernel, lujialin4, chenridong
On 2025/8/25 1:05, Waiman Long wrote:
>
> On 8/18/25 2:41 AM, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> The original alloc_cpumasks() served dual purposes: allocating cpumasks
>> for both temporary masks (tmpmasks) and cpuset structures. This patch:
>>
>> 1. Decouples these allocation paths for better code clarity
>> 2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
>> functions
>> 3. Maintains symmetric pairing:
>> - alloc_tmpmasks() ↔ free_tmpmasks()
>> - dup_or_alloc_cpuset() ↔ free_cpuset()
>>
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>> ---
>> kernel/cgroup/cpuset.c | 128 ++++++++++++++++++++++-------------------
>> 1 file changed, 69 insertions(+), 59 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index aebda14cc67f..d5588a1fef60 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -411,51 +411,46 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
>> }
>> /**
>> - * alloc_cpumasks - allocate three cpumasks for cpuset
>> - * @cs: the cpuset that have cpumasks to be allocated.
>> - * @tmp: the tmpmasks structure pointer
>> - * Return: 0 if successful, -ENOMEM otherwise.
>> + * alloc_cpumasks - Allocate an array of cpumask variables
>> + * @pmasks: Pointer to array of cpumask_var_t pointers
>> + * @size: Number of cpumasks to allocate
>> *
>> - * Only one of the two input arguments should be non-NULL.
>> + * Allocates @size cpumasks and initializes them to empty. Returns 0 on
>> + * success, -ENOMEM on allocation failure. On failure, any previously
>> + * allocated cpumasks are freed.
>
> The convention for the kernel-doc is to have a "Return:" tag if the function has a returned value.
> That "Return:" tag is deleted by this change. Your description does describe the returned value and
> no test robot failure was reported. Other than that, the rest of the patch looks good to me.
>
> Cheers,
> Longman
>
Thank you Longman, will update.
--
Best regards,
Ridong
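The allocate-or-unwind behavior that the reworked alloc_cpumasks() kernel-doc describes (free everything already allocated when one allocation fails) is a common C pattern; a generic sketch follows, with plain calloc standing in for zalloc_cpumask_var() and -1 standing in for -ENOMEM. The function name and signature here are illustrative, not the kernel's.

```c
#include <assert.h>
#include <stdlib.h>

/* Allocate @size zeroed buffers of @bytes each into *pmasks[0..size-1].
 * On failure, free the ones already allocated and return -1. */
static int alloc_masks(unsigned long **pmasks[], int size, size_t bytes)
{
	int i, j;

	for (i = 0; i < size; i++) {
		*pmasks[i] = calloc(1, bytes);
		if (!*pmasks[i])
			goto unwind;
	}
	return 0;
unwind:
	/* Roll back only the allocations that succeeded. */
	for (j = 0; j < i; j++) {
		free(*pmasks[j]);
		*pmasks[j] = NULL;
	}
	return -1;
}
```

Because the unwind path leaves every pointer NULL, a caller can bail out without tracking which allocations succeeded, which is what lets alloc_tmpmasks() and dup_or_alloc_cpuset() stay symmetric with their dedicated free helpers.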
Thread overview: 9+ messages
2025-08-18 6:41 [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 1/3] cpuset: decouple tmpmasks and cpumasks freeing in cgroup Chen Ridong
2025-08-24 17:07 ` Waiman Long
2025-08-18 6:41 ` [PATCH -next v4 2/3] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
2025-08-24 17:05 ` Waiman Long
2025-08-25 0:52 ` Chen Ridong
2025-08-18 6:41 ` [PATCH -next v4 3/3] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
2025-08-24 17:07 ` Waiman Long
2025-08-23 2:08 ` [PATCH -next v4 0/3] some optimization for cpuset Chen Ridong