* [-next v2 0/4] some optimization for cpuset
@ 2025-08-13  8:29 Chen Ridong
  2025-08-13  8:29 ` [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag Chen Ridong
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Chen Ridong @ 2025-08-13  8:29 UTC (permalink / raw)
  To: tj, hannes, mkoutny, longman
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

From: Chen Ridong <chenridong@huawei.com>

This patch series contains several cpuset improvements:

1. Remove redundant CS_ONLINE flag.
2. Decouple cpuset and tmpmasks allocation/freeing.
3. Add cpus_read_cpuset_[un]lock() helpers.

---
v2:
 - dropped the guard helper approach, using new helpers instead.
 - added patches for decoupling cpuset/tmpmasks allocation

Chen Ridong (4):
  cpuset: remove redundant CS_ONLINE flag
  cpuset: decouple tmpmaks and cpumaks of cs free
  cpuset: separate tmpmasks and cpuset allocation logic
  cpuset: add helpers for cpus read and cpuset_mutex locks

 include/linux/cgroup.h          |   5 +
 kernel/cgroup/cpuset-internal.h |   5 +-
 kernel/cgroup/cpuset-v1.c       |  12 +-
 kernel/cgroup/cpuset.c          | 212 ++++++++++++++++----------------
 4 files changed, 117 insertions(+), 117 deletions(-)

-- 
2.34.1



* [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag
  2025-08-13  8:29 [-next v2 0/4] some optimization for cpuset Chen Ridong
@ 2025-08-13  8:29 ` Chen Ridong
  2025-08-13 18:15   ` Tejun Heo
  2025-08-13  8:29 ` [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free Chen Ridong
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-13  8:29 UTC (permalink / raw)
  To: tj, hannes, mkoutny, longman
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

From: Chen Ridong <chenridong@huawei.com>

The CS_ONLINE flag was introduced prior to the CSS_ONLINE flag in the
cpuset subsystem. Currently, the flag setting sequence is as follows:

1. cpuset_css_online() sets CS_ONLINE
2. css->flags gets CSS_ONLINE set
...
3. cgroup->kill_css sets CSS_DYING
4. cpuset_css_offline() clears CS_ONLINE
5. css->flags clears CSS_ONLINE
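
The same ordering, sketched as simplified pseudo-code of the cgroup core
paths (illustrative only, not part of this patch):

	/* online path */
	ss->css_online(css);		/* step 1: cpuset_css_online()  */
	css->flags |= CSS_ONLINE;	/* step 2                       */
	...
	/* teardown path */
	css->flags |= CSS_DYING;	/* step 3: cgroup->kill_css     */
	ss->css_offline(css);		/* step 4: cpuset_css_offline() */
	css->flags &= ~CSS_ONLINE;	/* step 5                       */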

The is_cpuset_online() check currently occurs between steps 1 and 3.
However, it would be equally safe to perform this check between steps 2
and 3, as CSS_ONLINE provides the same synchronization guarantee as
CS_ONLINE.

Since CS_ONLINE is redundant with CSS_ONLINE and provides no additional
synchronization benefits, we can safely remove it to simplify the code.

Signed-off-by: Chen Ridong <chenridong@huawei.com>
Acked-by: Waiman Long <longman@redhat.com>
---
 include/linux/cgroup.h          | 5 +++++
 kernel/cgroup/cpuset-internal.h | 3 +--
 kernel/cgroup/cpuset.c          | 4 +---
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b18fb5fcb38e..ae73dbb19165 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -354,6 +354,11 @@ static inline bool css_is_dying(struct cgroup_subsys_state *css)
 	return css->flags & CSS_DYING;
 }
 
+static inline bool css_is_online(struct cgroup_subsys_state *css)
+{
+	return css->flags & CSS_ONLINE;
+}
+
 static inline bool css_is_self(struct cgroup_subsys_state *css)
 {
 	if (css == &css->cgroup->self) {
diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
index 383963e28ac6..75b3aef39231 100644
--- a/kernel/cgroup/cpuset-internal.h
+++ b/kernel/cgroup/cpuset-internal.h
@@ -38,7 +38,6 @@ enum prs_errcode {
 
 /* bits in struct cpuset flags field */
 typedef enum {
-	CS_ONLINE,
 	CS_CPU_EXCLUSIVE,
 	CS_MEM_EXCLUSIVE,
 	CS_MEM_HARDWALL,
@@ -202,7 +201,7 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
 /* convenient tests for these bits */
 static inline bool is_cpuset_online(struct cpuset *cs)
 {
-	return test_bit(CS_ONLINE, &cs->flags) && !css_is_dying(&cs->css);
+	return css_is_online(&cs->css) && !css_is_dying(&cs->css);
 }
 
 static inline int is_cpu_exclusive(const struct cpuset *cs)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 27adb04df675..3466ebbf1016 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -207,7 +207,7 @@ static inline void notify_partition_change(struct cpuset *cs, int old_prs)
  * parallel, we may leave an offline CPU in cpu_allowed or some other masks.
  */
 static struct cpuset top_cpuset = {
-	.flags = BIT(CS_ONLINE) | BIT(CS_CPU_EXCLUSIVE) |
+	.flags = BIT(CS_CPU_EXCLUSIVE) |
 		 BIT(CS_MEM_EXCLUSIVE) | BIT(CS_SCHED_LOAD_BALANCE),
 	.partition_root_state = PRS_ROOT,
 	.relax_domain_level = -1,
@@ -3496,7 +3496,6 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	cpus_read_lock();
 	mutex_lock(&cpuset_mutex);
 
-	set_bit(CS_ONLINE, &cs->flags);
 	if (is_spread_page(parent))
 		set_bit(CS_SPREAD_PAGE, &cs->flags);
 	if (is_spread_slab(parent))
@@ -3571,7 +3570,6 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 		cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
 
 	cpuset_dec();
-	clear_bit(CS_ONLINE, &cs->flags);
 
 	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
-- 
2.34.1



* [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free
  2025-08-13  8:29 [-next v2 0/4] some optimization for cpuset Chen Ridong
  2025-08-13  8:29 ` [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag Chen Ridong
@ 2025-08-13  8:29 ` Chen Ridong
  2025-08-13 19:50   ` Waiman Long
  2025-08-13  8:29 ` [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
  2025-08-13  8:29 ` [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
  3 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-13  8:29 UTC (permalink / raw)
  To: tj, hannes, mkoutny, longman
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

From: Chen Ridong <chenridong@huawei.com>

Currently, free_cpumasks() can free either the cpumasks embedded in a
cpuset or those in a tmpmasks structure, but the two operations are never
actually coupled. To make the function clearer, move the freeing of the
cpuset's cpumasks into free_cpuset() and rename free_cpumasks() to
free_tmpmasks(), giving each function a single responsibility.

Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/cgroup/cpuset.c | 32 +++++++++++++-------------------
 1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3466ebbf1016..aebda14cc67f 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -459,23 +459,14 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
 }
 
 /**
- * free_cpumasks - free cpumasks in a tmpmasks structure
- * @cs:  the cpuset that have cpumasks to be free.
+ * free_tmpmasks - free cpumasks in a tmpmasks structure
  * @tmp: the tmpmasks structure pointer
  */
-static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline void free_tmpmasks(struct tmpmasks *tmp)
 {
-	if (cs) {
-		free_cpumask_var(cs->cpus_allowed);
-		free_cpumask_var(cs->effective_cpus);
-		free_cpumask_var(cs->effective_xcpus);
-		free_cpumask_var(cs->exclusive_cpus);
-	}
-	if (tmp) {
-		free_cpumask_var(tmp->new_cpus);
-		free_cpumask_var(tmp->addmask);
-		free_cpumask_var(tmp->delmask);
-	}
+	free_cpumask_var(tmp->new_cpus);
+	free_cpumask_var(tmp->addmask);
+	free_cpumask_var(tmp->delmask);
 }
 
 /**
@@ -508,7 +499,10 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
  */
 static inline void free_cpuset(struct cpuset *cs)
 {
-	free_cpumasks(cs, NULL);
+	free_cpumask_var(cs->cpus_allowed);
+	free_cpumask_var(cs->effective_cpus);
+	free_cpumask_var(cs->effective_xcpus);
+	free_cpumask_var(cs->exclusive_cpus);
 	kfree(cs);
 }
 
@@ -2427,7 +2421,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cs->partition_root_state)
 		update_partition_sd_lb(cs, old_prs);
 out_free:
-	free_cpumasks(NULL, &tmp);
+	free_tmpmasks(&tmp);
 	return retval;
 }
 
@@ -2530,7 +2524,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cs->partition_root_state)
 		update_partition_sd_lb(cs, old_prs);
 
-	free_cpumasks(NULL, &tmp);
+	free_tmpmasks(&tmp);
 	return 0;
 }
 
@@ -2983,7 +2977,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	notify_partition_change(cs, old_prs);
 	if (force_sd_rebuild)
 		rebuild_sched_domains_locked();
-	free_cpumasks(NULL, &tmpmask);
+	free_tmpmasks(&tmpmask);
 	return 0;
 }
 
@@ -4006,7 +4000,7 @@ static void cpuset_handle_hotplug(void)
 	if (force_sd_rebuild)
 		rebuild_sched_domains_cpuslocked();
 
-	free_cpumasks(NULL, ptmp);
+	free_tmpmasks(ptmp);
 }
 
 void cpuset_update_active_cpus(void)
-- 
2.34.1



* [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
  2025-08-13  8:29 [-next v2 0/4] some optimization for cpuset Chen Ridong
  2025-08-13  8:29 ` [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag Chen Ridong
  2025-08-13  8:29 ` [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free Chen Ridong
@ 2025-08-13  8:29 ` Chen Ridong
  2025-08-13 21:28   ` kernel test robot
  2025-08-13  8:29 ` [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
  3 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-13  8:29 UTC (permalink / raw)
  To: tj, hannes, mkoutny, longman
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

From: Chen Ridong <chenridong@huawei.com>

The original alloc_cpumasks() served dual purposes: allocating cpumasks
for both temporary masks (tmpmasks) and cpuset structures. This patch:

1. Decouples these allocation paths for better code clarity
2. Introduces dedicated alloc_tmpmasks() and dup_or_alloc_cpuset()
   functions
3. Maintains symmetric pairing:
   - alloc_tmpmasks() ↔ free_tmpmasks()
   - dup_or_alloc_cpuset() ↔ free_cpuset()
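
As an illustration of the pairing in point 3, a minimal usage sketch (the
example_cpuset_op() wrapper is hypothetical and not part of the patch):

	static int example_cpuset_op(struct cpuset *cs)
	{
		struct tmpmasks tmp;
		struct cpuset *trial;

		if (alloc_tmpmasks(&tmp))		/* paired with free_tmpmasks() */
			return -ENOMEM;

		trial = dup_or_alloc_cpuset(cs);	/* paired with free_cpuset() */
		if (!trial) {
			free_tmpmasks(&tmp);
			return -ENOMEM;
		}

		/* ... operate on trial, using tmp.new_cpus/addmask/delmask ... */

		free_cpuset(trial);
		free_tmpmasks(&tmp);
		return 0;
	}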

Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/cgroup/cpuset.c | 128 ++++++++++++++++++++++-------------------
 1 file changed, 69 insertions(+), 59 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index aebda14cc67f..3c5e44f824d1 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -411,51 +411,46 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 }
 
 /**
- * alloc_cpumasks - allocate three cpumasks for cpuset
- * @cs:  the cpuset that have cpumasks to be allocated.
- * @tmp: the tmpmasks structure pointer
- * Return: 0 if successful, -ENOMEM otherwise.
+ * alloc_cpumasks - Allocate an array of cpumask variables
+ * @cpumasks: Pointer to array of cpumask_var_t pointers
+ * @size: Number of cpumasks to allocate
  *
- * Only one of the two input arguments should be non-NULL.
+ * Allocates @size cpumasks and initializes them to empty. Returns 0 on
+ * success, -ENOMEM on allocation failure. On failure, any previously
+ * allocated cpumasks are freed.
  */
-static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
+static inline int alloc_cpumasks(cpumask_var_t *pmasks[], u32 size)
 {
-	cpumask_var_t *pmask1, *pmask2, *pmask3, *pmask4;
+	int i;
 
-	if (cs) {
-		pmask1 = &cs->cpus_allowed;
-		pmask2 = &cs->effective_cpus;
-		pmask3 = &cs->effective_xcpus;
-		pmask4 = &cs->exclusive_cpus;
-	} else {
-		pmask1 = &tmp->new_cpus;
-		pmask2 = &tmp->addmask;
-		pmask3 = &tmp->delmask;
-		pmask4 = NULL;
+	for (i = 0; i < size; i++) {
+		if (!zalloc_cpumask_var(pmasks[i], GFP_KERNEL)) {
+			while (--i >= 0)
+				free_cpumask_var(*pmasks[i]);
+			return -ENOMEM;
+		}
 	}
-
-	if (!zalloc_cpumask_var(pmask1, GFP_KERNEL))
-		return -ENOMEM;
-
-	if (!zalloc_cpumask_var(pmask2, GFP_KERNEL))
-		goto free_one;
-
-	if (!zalloc_cpumask_var(pmask3, GFP_KERNEL))
-		goto free_two;
-
-	if (pmask4 && !zalloc_cpumask_var(pmask4, GFP_KERNEL))
-		goto free_three;
-
-
 	return 0;
+}
 
-free_three:
-	free_cpumask_var(*pmask3);
-free_two:
-	free_cpumask_var(*pmask2);
-free_one:
-	free_cpumask_var(*pmask1);
-	return -ENOMEM;
+/**
+ * alloc_tmpmasks - Allocate temporary cpumasks for cpuset operations.
+ * @tmp: Pointer to tmpmasks structure to populate
+ * Return: 0 on success, -ENOMEM on allocation failure
+ */
+static inline int alloc_tmpmasks(struct tmpmasks *tmp)
+{
+	/*
+	 * Array of pointers to the three cpumask_var_t fields in tmpmasks.
+	 * Note: Array size must match actual number of masks (3)
+	 */
+	cpumask_var_t *pmask[3] = {
+		&tmp->new_cpus,
+		&tmp->addmask,
+		&tmp->delmask
+	};
+
+	return alloc_cpumasks(pmask, ARRAY_SIZE(pmask));
 }
 
 /**
@@ -470,26 +465,46 @@ static inline void free_tmpmasks(struct tmpmasks *tmp)
 }
 
 /**
- * alloc_trial_cpuset - allocate a trial cpuset
- * @cs: the cpuset that the trial cpuset duplicates
+ * dup_or_alloc_cpuset - Duplicate or allocate a new cpuset
+ * @cs: Source cpuset to duplicate (NULL for a fresh allocation)
+ *
+ * Creates a new cpuset by either:
+ * 1. Duplicating an existing cpuset (if @cs is non-NULL), or
+ * 2. Allocating a fresh cpuset with zero-initialized masks (if @cs is NULL)
+ *
+ * Return: Pointer to newly allocated cpuset on success, NULL on failure
  */
-static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
+static struct cpuset *dup_or_alloc_cpuset(struct cpuset *cs)
 {
 	struct cpuset *trial;
 
-	trial = kmemdup(cs, sizeof(*cs), GFP_KERNEL);
+	/* Allocate base structure */
+	trial = cs ? kmemdup(cs, sizeof(*cs), GFP_KERNEL) :
+		     kzalloc(sizeof(*cs), GFP_KERNEL);
 	if (!trial)
 		return NULL;
 
-	if (alloc_cpumasks(trial, NULL)) {
+	/* Setup cpumask pointer array */
+	cpumask_var_t *pmask[4] = {
+		&trial->cpus_allowed,
+		&trial->effective_cpus,
+		&trial->effective_xcpus,
+		&trial->exclusive_cpus
+	};
+
+	if (alloc_cpumasks(pmask, ARRAY_SIZE(pmask))) {
 		kfree(trial);
 		return NULL;
 	}
 
-	cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
-	cpumask_copy(trial->effective_cpus, cs->effective_cpus);
-	cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
-	cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	/* Copy masks if duplicating */
+	if (cs) {
+		cpumask_copy(trial->cpus_allowed, cs->cpus_allowed);
+		cpumask_copy(trial->effective_cpus, cs->effective_cpus);
+		cpumask_copy(trial->effective_xcpus, cs->effective_xcpus);
+		cpumask_copy(trial->exclusive_cpus, cs->exclusive_cpus);
+	}
+
 	return trial;
 }
 
@@ -2332,7 +2347,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (cpumask_equal(cs->cpus_allowed, trialcs->cpus_allowed))
 		return 0;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2476,7 +2491,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
 	if (retval)
 		return retval;
 
-	if (alloc_cpumasks(NULL, &tmp))
+	if (alloc_tmpmasks(&tmp))
 		return -ENOMEM;
 
 	if (old_prs) {
@@ -2820,7 +2835,7 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
 	int spread_flag_changed;
 	int err;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs)
 		return -ENOMEM;
 
@@ -2881,7 +2896,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
 	if (new_prs && is_prs_invalid(old_prs))
 		old_prs = PRS_MEMBER;
 
-	if (alloc_cpumasks(NULL, &tmpmask))
+	if (alloc_tmpmasks(&tmpmask))
 		return -ENOMEM;
 
 	err = update_partition_exclusive_flag(cs, new_prs);
@@ -3223,7 +3238,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
-	trialcs = alloc_trial_cpuset(cs);
+	trialcs = dup_or_alloc_cpuset(cs);
 	if (!trialcs) {
 		retval = -ENOMEM;
 		goto out_unlock;
@@ -3456,15 +3471,10 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
 	if (!parent_css)
 		return &top_cpuset.css;
 
-	cs = kzalloc(sizeof(*cs), GFP_KERNEL);
+	cs = dup_or_alloc_cpuset(NULL);
 	if (!cs)
 		return ERR_PTR(-ENOMEM);
 
-	if (alloc_cpumasks(cs, NULL)) {
-		kfree(cs);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	__set_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
 	fmeter_init(&cs->fmeter);
 	cs->relax_domain_level = -1;
@@ -3920,7 +3930,7 @@ static void cpuset_handle_hotplug(void)
 	bool on_dfl = is_in_v2_mode();
 	struct tmpmasks tmp, *ptmp = NULL;
 
-	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
+	if (on_dfl && !alloc_tmpmasks(&tmp))
 		ptmp = &tmp;
 
 	lockdep_assert_cpus_held();
-- 
2.34.1



* [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-13  8:29 [-next v2 0/4] some optimization for cpuset Chen Ridong
                   ` (2 preceding siblings ...)
  2025-08-13  8:29 ` [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
@ 2025-08-13  8:29 ` Chen Ridong
  2025-08-13 20:09   ` Waiman Long
  3 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-13  8:29 UTC (permalink / raw)
  To: tj, hannes, mkoutny, longman
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

From: Chen Ridong <chenridong@huawei.com>

cpuset: add helpers for cpus_read_lock and cpuset_mutex

Replace repetitive locking patterns with new helpers:
- cpus_read_cpuset_lock()
- cpus_read_cpuset_unlock()

This makes the code cleaner and ensures consistent lock ordering.
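
For example, call sites that currently open-code the pair:

	cpus_read_lock();
	mutex_lock(&cpuset_mutex);
	...
	mutex_unlock(&cpuset_mutex);
	cpus_read_unlock();

become (as in the diff below):

	cpus_read_cpuset_lock();
	...
	cpus_read_cpuset_unlock();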

Signed-off-by: Chen Ridong <chenridong@huawei.com>
---
 kernel/cgroup/cpuset-internal.h |  2 ++
 kernel/cgroup/cpuset-v1.c       | 12 +++------
 kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
 3 files changed, 28 insertions(+), 34 deletions(-)

diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
index 75b3aef39231..6fb00c96044d 100644
--- a/kernel/cgroup/cpuset-internal.h
+++ b/kernel/cgroup/cpuset-internal.h
@@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
 ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 				    char *buf, size_t nbytes, loff_t off);
 int cpuset_common_seq_show(struct seq_file *sf, void *v);
+void cpus_read_cpuset_lock(void);
+void cpus_read_cpuset_unlock(void);
 
 /*
  * cpuset-v1.c
diff --git a/kernel/cgroup/cpuset-v1.c b/kernel/cgroup/cpuset-v1.c
index b69a7db67090..f3d2d116c842 100644
--- a/kernel/cgroup/cpuset-v1.c
+++ b/kernel/cgroup/cpuset-v1.c
@@ -169,8 +169,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 	cpuset_filetype_t type = cft->private;
 	int retval = -ENODEV;
 
-	cpus_read_lock();
-	cpuset_lock();
+	cpus_read_cpuset_lock();
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -184,8 +183,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	cpuset_unlock();
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 	return retval;
 }
 
@@ -454,8 +452,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 	cpuset_filetype_t type = cft->private;
 	int retval = 0;
 
-	cpus_read_lock();
-	cpuset_lock();
+	cpus_read_cpuset_lock();
 	if (!is_cpuset_online(cs)) {
 		retval = -ENODEV;
 		goto out_unlock;
@@ -498,8 +495,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	cpuset_unlock();
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 	return retval;
 }
 
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 3c5e44f824d1..9c0e8f297aaf 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -260,6 +260,18 @@ void cpuset_unlock(void)
 	mutex_unlock(&cpuset_mutex);
 }
 
+void cpus_read_cpuset_lock(void)
+{
+	cpus_read_lock();
+	mutex_lock(&cpuset_mutex);
+}
+
+void cpus_read_cpuset_unlock(void)
+{
+	mutex_unlock(&cpuset_mutex);
+	cpus_read_unlock();
+}
+
 static DEFINE_SPINLOCK(callback_lock);
 
 void cpuset_callback_lock_irq(void)
@@ -3233,8 +3245,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	int retval = -ENODEV;
 
 	buf = strstrip(buf);
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
+	cpus_read_cpuset_lock();
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -3263,8 +3274,7 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	if (force_sd_rebuild)
 		rebuild_sched_domains_locked();
 out_unlock:
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 	flush_workqueue(cpuset_migrate_mm_wq);
 	return retval ?: nbytes;
 }
@@ -3367,12 +3377,10 @@ static ssize_t cpuset_partition_write(struct kernfs_open_file *of, char *buf,
 	else
 		return -EINVAL;
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
+	cpus_read_cpuset_lock();
 	if (is_cpuset_online(cs))
 		retval = update_prstate(cs, val);
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 	return retval ?: nbytes;
 }
 
@@ -3497,9 +3505,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	if (!parent)
 		return 0;
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpus_read_cpuset_lock();
 	if (is_spread_page(parent))
 		set_bit(CS_SPREAD_PAGE, &cs->flags);
 	if (is_spread_slab(parent))
@@ -3551,8 +3557,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
 	spin_unlock_irq(&callback_lock);
 out_unlock:
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 	return 0;
 }
 
@@ -3567,16 +3572,12 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 {
 	struct cpuset *cs = css_cs(css);
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpus_read_cpuset_lock();
 	if (!cpuset_v2() && is_sched_load_balance(cs))
 		cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
 
 	cpuset_dec();
-
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
+	cpus_read_cpuset_unlock();
 }
 
 /*
@@ -3588,16 +3589,11 @@ static void cpuset_css_killed(struct cgroup_subsys_state *css)
 {
 	struct cpuset *cs = css_cs(css);
 
-	cpus_read_lock();
-	mutex_lock(&cpuset_mutex);
-
+	cpus_read_cpuset_lock();
 	/* Reset valid partition back to member */
 	if (is_partition_valid(cs))
 		update_prstate(cs, PRS_MEMBER);
-
-	mutex_unlock(&cpuset_mutex);
-	cpus_read_unlock();
-
+	cpus_read_cpuset_unlock();
 }
 
 static void cpuset_css_free(struct cgroup_subsys_state *css)
-- 
2.34.1



* Re: [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag
  2025-08-13  8:29 ` [-next v2 1/4] cpuset: remove redundant CS_ONLINE flag Chen Ridong
@ 2025-08-13 18:15   ` Tejun Heo
  0 siblings, 0 replies; 19+ messages in thread
From: Tejun Heo @ 2025-08-13 18:15 UTC (permalink / raw)
  To: Chen Ridong
  Cc: hannes, mkoutny, longman, cgroups, linux-kernel, lujialin4,
	chenridong, christophe.jaillet

On Wed, Aug 13, 2025 at 08:29:01AM +0000, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
> 
> The CS_ONLINE flag was introduced prior to the CSS_ONLINE flag in the
> cpuset subsystem. Currently, the flag setting sequence is as follows:
> 
> 1. cpuset_css_online() sets CS_ONLINE
> 2. css->flags gets CSS_ONLINE set
> ...
> 3. cgroup->kill_css sets CSS_DYING
> 4. cpuset_css_offline() clears CS_ONLINE
> 5. css->flags clears CSS_ONLINE
> 
> The is_cpuset_online() check currently occurs between steps 1 and 3.
> However, it would be equally safe to perform this check between steps 2
> and 3, as CSS_ONLINE provides the same synchronization guarantee as
> CS_ONLINE.
> 
> Since CS_ONLINE is redundant with CSS_ONLINE and provides no additional
> synchronization benefits, we can safely remove it to simplify the code.
> 
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> Acked-by: Waiman Long <longman@redhat.com>

Applied to cgroup/for-6.18.

Thanks.

-- 
tejun


* Re: [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free
  2025-08-13  8:29 ` [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free Chen Ridong
@ 2025-08-13 19:50   ` Waiman Long
  2025-08-14  0:38     ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-13 19:50 UTC (permalink / raw)
  To: Chen Ridong, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet


On 8/13/25 4:29 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> Currently, free_cpumasks() can free either the cpumasks embedded in a
> cpuset or those in a tmpmasks structure, but the two operations are never
> actually coupled. To make the function clearer, move the freeing of the
> cpuset's cpumasks into free_cpuset() and rename free_cpumasks() to
> free_tmpmasks(), giving each function a single responsibility.
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>

Other than typos in the patch title, the code change looks good to me.

Cheers,
Longman

> ---
>   kernel/cgroup/cpuset.c | 32 +++++++++++++-------------------
>   1 file changed, 13 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 3466ebbf1016..aebda14cc67f 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -459,23 +459,14 @@ static inline int alloc_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
>   }
>   
>   /**
> - * free_cpumasks - free cpumasks in a tmpmasks structure
> - * @cs:  the cpuset that have cpumasks to be free.
> + * free_tmpmasks - free cpumasks in a tmpmasks structure
>    * @tmp: the tmpmasks structure pointer
>    */
> -static inline void free_cpumasks(struct cpuset *cs, struct tmpmasks *tmp)
> +static inline void free_tmpmasks(struct tmpmasks *tmp)
>   {
> -	if (cs) {
> -		free_cpumask_var(cs->cpus_allowed);
> -		free_cpumask_var(cs->effective_cpus);
> -		free_cpumask_var(cs->effective_xcpus);
> -		free_cpumask_var(cs->exclusive_cpus);
> -	}
> -	if (tmp) {
> -		free_cpumask_var(tmp->new_cpus);
> -		free_cpumask_var(tmp->addmask);
> -		free_cpumask_var(tmp->delmask);
> -	}
> +	free_cpumask_var(tmp->new_cpus);
> +	free_cpumask_var(tmp->addmask);
> +	free_cpumask_var(tmp->delmask);
>   }
>   
>   /**
> @@ -508,7 +499,10 @@ static struct cpuset *alloc_trial_cpuset(struct cpuset *cs)
>    */
>   static inline void free_cpuset(struct cpuset *cs)
>   {
> -	free_cpumasks(cs, NULL);
> +	free_cpumask_var(cs->cpus_allowed);
> +	free_cpumask_var(cs->effective_cpus);
> +	free_cpumask_var(cs->effective_xcpus);
> +	free_cpumask_var(cs->exclusive_cpus);
>   	kfree(cs);
>   }
>   
> @@ -2427,7 +2421,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
>   	if (cs->partition_root_state)
>   		update_partition_sd_lb(cs, old_prs);
>   out_free:
> -	free_cpumasks(NULL, &tmp);
> +	free_tmpmasks(&tmp);
>   	return retval;
>   }
>   
> @@ -2530,7 +2524,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
>   	if (cs->partition_root_state)
>   		update_partition_sd_lb(cs, old_prs);
>   
> -	free_cpumasks(NULL, &tmp);
> +	free_tmpmasks(&tmp);
>   	return 0;
>   }
>   
> @@ -2983,7 +2977,7 @@ static int update_prstate(struct cpuset *cs, int new_prs)
>   	notify_partition_change(cs, old_prs);
>   	if (force_sd_rebuild)
>   		rebuild_sched_domains_locked();
> -	free_cpumasks(NULL, &tmpmask);
> +	free_tmpmasks(&tmpmask);
>   	return 0;
>   }
>   
> @@ -4006,7 +4000,7 @@ static void cpuset_handle_hotplug(void)
>   	if (force_sd_rebuild)
>   		rebuild_sched_domains_cpuslocked();
>   
> -	free_cpumasks(NULL, ptmp);
> +	free_tmpmasks(ptmp);
>   }
>   
>   void cpuset_update_active_cpus(void)



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-13  8:29 ` [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks Chen Ridong
@ 2025-08-13 20:09   ` Waiman Long
  2025-08-14  0:44     ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-13 20:09 UTC (permalink / raw)
  To: Chen Ridong, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

On 8/13/25 4:29 AM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>
> Replace repetitive locking patterns with new helpers:
> - cpus_read_cpuset_lock()
> - cpus_read_cpuset_unlock()
>
> This makes the code cleaner and ensures consistent lock ordering.
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
>   kernel/cgroup/cpuset-internal.h |  2 ++
>   kernel/cgroup/cpuset-v1.c       | 12 +++------
>   kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>   3 files changed, 28 insertions(+), 34 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
> index 75b3aef39231..6fb00c96044d 100644
> --- a/kernel/cgroup/cpuset-internal.h
> +++ b/kernel/cgroup/cpuset-internal.h
> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
>   ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>   				    char *buf, size_t nbytes, loff_t off);
>   int cpuset_common_seq_show(struct seq_file *sf, void *v);
> +void cpus_read_cpuset_lock(void);
> +void cpus_read_cpuset_unlock(void);

The names are not intuitive. I would prefer just extend the 
cpuset_lock/unlock to include cpus_read_lock/unlock and we use 
cpuset_lock/unlock consistently in the cpuset code. Also, there is now 
no external user of cpuset_lock/unlock, we may as well remove them from 
include/linux/cpuset.h.

Cheers,
Longman



* Re: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
  2025-08-13  8:29 ` [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic Chen Ridong
@ 2025-08-13 21:28   ` kernel test robot
  2025-08-15  0:44     ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: kernel test robot @ 2025-08-13 21:28 UTC (permalink / raw)
  To: Chen Ridong, tj, hannes, mkoutny, longman
  Cc: oe-kbuild-all, cgroups, linux-kernel, lujialin4, chenridong,
	christophe.jaillet

Hi Chen,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tj-cgroup/for-next]
[also build test WARNING on linus/master v6.17-rc1 next-20250813]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Chen-Ridong/cpuset-remove-redundant-CS_ONLINE-flag/20250813-164651
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
patch link:    https://lore.kernel.org/r/20250813082904.1091651-4-chenridong%40huaweicloud.com
patch subject: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202508140524.S2O4D57k-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> Warning: kernel/cgroup/cpuset.c:422 function parameter 'pmasks' not described in 'alloc_cpumasks'

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [-next v2 2/4] cpuset: decouple tmpmaks and cpumaks of cs free
  2025-08-13 19:50   ` Waiman Long
@ 2025-08-14  0:38     ` Chen Ridong
  0 siblings, 0 replies; 19+ messages in thread
From: Chen Ridong @ 2025-08-14  0:38 UTC (permalink / raw)
  To: Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet



On 2025/8/14 3:50, Waiman Long wrote:
> On 8/13/25 4:29 AM, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> Currently, free_cpumasks() can free either the cpumasks embedded in a
>> cpuset or those in a tmpmasks structure, but the two operations are never
>> actually coupled. To make the function clearer, move the freeing of the
>> cpuset's cpumasks into free_cpuset() and rename free_cpumasks() to
>> free_tmpmasks(), giving each function a single responsibility.
>>
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> 
> Other than typos in the patch title, the code change looks good to me.
> 
> Cheers,
> Longman

Thank you for your feedback.
I apologize for the typos—I will correct them promptly.

Best regards,
Ridong



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-13 20:09   ` Waiman Long
@ 2025-08-14  0:44     ` Chen Ridong
  2025-08-14  3:13       ` Waiman Long
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-14  0:44 UTC (permalink / raw)
  To: Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet



On 2025/8/14 4:09, Waiman Long wrote:
> On 8/13/25 4:29 AM, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>
>> Replace repetitive locking patterns with new helpers:
>> - cpus_read_cpuset_lock()
>> - cpus_read_cpuset_unlock()
>>
>> This makes the code cleaner and ensures consistent lock ordering.
>>
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>> ---
>>   kernel/cgroup/cpuset-internal.h |  2 ++
>>   kernel/cgroup/cpuset-v1.c       | 12 +++------
>>   kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>>   3 files changed, 28 insertions(+), 34 deletions(-)
>>
>> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
>> index 75b3aef39231..6fb00c96044d 100644
>> --- a/kernel/cgroup/cpuset-internal.h
>> +++ b/kernel/cgroup/cpuset-internal.h
>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
>>   ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>                       char *buf, size_t nbytes, loff_t off);
>>   int cpuset_common_seq_show(struct seq_file *sf, void *v);
>> +void cpus_read_cpuset_lock(void);
>> +void cpus_read_cpuset_unlock(void);
> 
> The names are not intuitive. I would prefer just extend the cpuset_lock/unlock to include
> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in the cpuset code. Also, there is
> now no external user of cpuset_lock/unlock, we may as well remove them from include/linux/cpuset.h.
> 
> Cheers,
> Longman

I like the idea and have considered it.
However, I noticed that cpuset_lock() is being used in __sched_setscheduler().

-- 
Best regards,
Ridong



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-14  0:44     ` Chen Ridong
@ 2025-08-14  3:13       ` Waiman Long
  2025-08-14  3:27         ` Waiman Long
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-14  3:13 UTC (permalink / raw)
  To: Chen Ridong, Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

On 8/13/25 8:44 PM, Chen Ridong wrote:
>
> On 2025/8/14 4:09, Waiman Long wrote:
>> On 8/13/25 4:29 AM, Chen Ridong wrote:
>>> From: Chen Ridong <chenridong@huawei.com>
>>>
>>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>>
>>> Replace repetitive locking patterns with new helpers:
>>> - cpus_read_cpuset_lock()
>>> - cpus_read_cpuset_unlock()
>>>
>>> This makes the code cleaner and ensures consistent lock ordering.
>>>
>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>> ---
>>>    kernel/cgroup/cpuset-internal.h |  2 ++
>>>    kernel/cgroup/cpuset-v1.c       | 12 +++------
>>>    kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>>>    3 files changed, 28 insertions(+), 34 deletions(-)
>>>
>>> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
>>> index 75b3aef39231..6fb00c96044d 100644
>>> --- a/kernel/cgroup/cpuset-internal.h
>>> +++ b/kernel/cgroup/cpuset-internal.h
>>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int turning_on)
>>>    ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>>                        char *buf, size_t nbytes, loff_t off);
>>>    int cpuset_common_seq_show(struct seq_file *sf, void *v);
>>> +void cpus_read_cpuset_lock(void);
>>> +void cpus_read_cpuset_unlock(void);
>> The names are not intuitive. I would prefer just extend the cpuset_lock/unlock to include
>> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in the cpuset code. Also, there is
>> now no external user of cpuset_lock/unlock, we may as well remove them from include/linux/cpuset.h.
>>
>> Cheers,
>> Longman
> I like the idea and have considered it.
> However, I noticed that cpuset_lock() is being used in __sched_setscheduler().

Right, I overlooked the cpuset_lock() call in kernel/sched/syscall.c.
So we can't remove it from include/linux/cpuset.h.

This call is invoked to ensure cpusets information is stable. However, 
it doesn't hurt if the cpus_read_lock() is also acquired as a result. 
Alternatively, we can use a name like cpuset_full_lock() to include 
cpus_read_lock().

Cheers,
Longman



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-14  3:13       ` Waiman Long
@ 2025-08-14  3:27         ` Waiman Long
  2025-08-14  3:58           ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-14  3:27 UTC (permalink / raw)
  To: Waiman Long, Chen Ridong, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

On 8/13/25 11:13 PM, Waiman Long wrote:
> On 8/13/25 8:44 PM, Chen Ridong wrote:
>>
>> On 2025/8/14 4:09, Waiman Long wrote:
>>> On 8/13/25 4:29 AM, Chen Ridong wrote:
>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>
>>>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>>>
>>>> Replace repetitive locking patterns with new helpers:
>>>> - cpus_read_cpuset_lock()
>>>> - cpus_read_cpuset_unlock()
>>>>
>>>> This makes the code cleaner and ensures consistent lock ordering.
>>>>
>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>> ---
>>>>    kernel/cgroup/cpuset-internal.h |  2 ++
>>>>    kernel/cgroup/cpuset-v1.c       | 12 +++------
>>>>    kernel/cgroup/cpuset.c          | 48 
>>>> +++++++++++++++------------------
>>>>    3 files changed, 28 insertions(+), 34 deletions(-)
>>>>
>>>> diff --git a/kernel/cgroup/cpuset-internal.h 
>>>> b/kernel/cgroup/cpuset-internal.h
>>>> index 75b3aef39231..6fb00c96044d 100644
>>>> --- a/kernel/cgroup/cpuset-internal.h
>>>> +++ b/kernel/cgroup/cpuset-internal.h
>>>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, 
>>>> struct cpuset *cs, int turning_on)
>>>>    ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>>>                        char *buf, size_t nbytes, loff_t off);
>>>>    int cpuset_common_seq_show(struct seq_file *sf, void *v);
>>>> +void cpus_read_cpuset_lock(void);
>>>> +void cpus_read_cpuset_unlock(void);
>>> The names are not intuitive. I would prefer just extend the 
>>> cpuset_lock/unlock to include
>>> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in 
>>> the cpuset code. Also, there is
>>> now no external user of cpuset_lock/unlock, we may as well remove 
>>> them from include/linux/cpuset.h.
>>>
>>> Cheers,
>>> Longman
>> I like the idea and have considered it.
>> However, I noticed that cpuset_lock() is being used in
>> __sched_setscheduler().
>
> Right, I overlooked the cpuset_lock() call in kernel/sched/syscall.c.
> So we can't remove it from include/linux/cpuset.h.
>
> This call is invoked to ensure cpusets information is stable. However, 
> it doesn't hurt if the cpus_read_lock() is also acquired as a result. 
> Alternatively, we can use a name like cpuset_full_lock() to include 
> cpus_read_lock().

I have a correction. According to commit d74b27d63a8b ("cgroup/cpuset: 
Change cpuset_rwsem and hotplug lock order"), __sched_setscheduler() can be 
called while holding cpus_hotplug_lock. So we should keep cpuset_lock() 
as it is.

Cheers,
Longman



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-14  3:27         ` Waiman Long
@ 2025-08-14  3:58           ` Chen Ridong
  2025-08-15 19:13             ` Waiman Long
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-14  3:58 UTC (permalink / raw)
  To: Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet



On 2025/8/14 11:27, Waiman Long wrote:
> On 8/13/25 11:13 PM, Waiman Long wrote:
>> On 8/13/25 8:44 PM, Chen Ridong wrote:
>>>
>>> On 2025/8/14 4:09, Waiman Long wrote:
>>>> On 8/13/25 4:29 AM, Chen Ridong wrote:
>>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>>
>>>>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>>>>
>>>>> Replace repetitive locking patterns with new helpers:
>>>>> - cpus_read_cpuset_lock()
>>>>> - cpus_read_cpuset_unlock()
>>>>>
>>>>> This makes the code cleaner and ensures consistent lock ordering.
>>>>>
>>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>>> ---
>>>>>    kernel/cgroup/cpuset-internal.h |  2 ++
>>>>>    kernel/cgroup/cpuset-v1.c       | 12 +++------
>>>>>    kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>>>>>    3 files changed, 28 insertions(+), 34 deletions(-)
>>>>>
>>>>> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
>>>>> index 75b3aef39231..6fb00c96044d 100644
>>>>> --- a/kernel/cgroup/cpuset-internal.h
>>>>> +++ b/kernel/cgroup/cpuset-internal.h
>>>>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int
>>>>> turning_on)
>>>>>    ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>>>>                        char *buf, size_t nbytes, loff_t off);
>>>>>    int cpuset_common_seq_show(struct seq_file *sf, void *v);
>>>>> +void cpus_read_cpuset_lock(void);
>>>>> +void cpus_read_cpuset_unlock(void);
>>>> The names are not intuitive. I would prefer just extend the cpuset_lock/unlock to include
>>>> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in the cpuset code. Also, there is
>>>> now no external user of cpuset_lock/unlock, we may as well remove them from include/linux/cpuset.h.
>>>>
>>>> Cheers,
>>>> Longman
>>> I like the idea and have considered it.
>>> However, I noticed that cpuset_lock() is being used in __sched_setscheduler().
>>
>>> Right, I overlooked the cpuset_lock() call in kernel/sched/syscall.c. So we can't remove it from
>> include/linux/cpuset.h.
>>
>> This call is invoked to ensure cpusets information is stable. However, it doesn't hurt if the
>> cpus_read_lock() is also acquired as a result. Alternatively, we can use a name like
>> cpuset_full_lock() to include cpus_read_lock().
> 
> I have a correction. According to commit d74b27d63a8b ("cgroup/cpuset: Change cpuset_rwsem and
> hotplug lock order") , sched_scheduler() can be called while holding cpus_hotplug_lock. So we should
> keep cpuset_lock() as it is.
> 
> Cheers,
> Longman

Thank you Longman, this is very helpful.

I had considered whether we can add cpus_read_lock() to the cpuset_lock, but based on your
explanation, I now understand this approach would not work.

For clarity, would it be acceptable to rename:
cpus_read_cpuset_lock() -> cpuset_full_lock()
cpus_read_cpuset_unlock() -> cpuset_full_unlock()

-- 
Best regards,
Ridong



* Re: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
  2025-08-13 21:28   ` kernel test robot
@ 2025-08-15  0:44     ` Chen Ridong
  2025-08-15 19:15       ` Waiman Long
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Ridong @ 2025-08-15  0:44 UTC (permalink / raw)
  To: kernel test robot, tj, hannes, mkoutny, longman
  Cc: oe-kbuild-all, cgroups, linux-kernel, lujialin4, chenridong,
	christophe.jaillet



On 2025/8/14 5:28, kernel test robot wrote:
> Hi Chen,
> 
> kernel test robot noticed the following build warnings:
> 
> [auto build test WARNING on tj-cgroup/for-next]
> [also build test WARNING on linus/master v6.17-rc1 next-20250813]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> 
> url:    https://github.com/intel-lab-lkp/linux/commits/Chen-Ridong/cpuset-remove-redundant-CS_ONLINE-flag/20250813-164651
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
> patch link:    https://lore.kernel.org/r/20250813082904.1091651-4-chenridong%40huaweicloud.com
> patch subject: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
> config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/config)
> compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/reproduce)
> 
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202508140524.S2O4D57k-lkp@intel.com/
> 
> All warnings (new ones prefixed by >>):
> 
>>> Warning: kernel/cgroup/cpuset.c:422 function parameter 'pmasks' not described in 'alloc_cpumasks'
> 

Hi all,

Thank you for the warning about the comment issue - I will fix it in the next version.

I would appreciate it if you could review this patch. If you have any feedback, I can update the entire
series together.

-- 
Best regards,
Ridong



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-14  3:58           ` Chen Ridong
@ 2025-08-15 19:13             ` Waiman Long
  2025-08-16  0:23               ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-15 19:13 UTC (permalink / raw)
  To: Chen Ridong, Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet

On 8/13/25 11:58 PM, Chen Ridong wrote:
>
> On 2025/8/14 11:27, Waiman Long wrote:
>> On 8/13/25 11:13 PM, Waiman Long wrote:
>>> On 8/13/25 8:44 PM, Chen Ridong wrote:
>>>> On 2025/8/14 4:09, Waiman Long wrote:
>>>>> On 8/13/25 4:29 AM, Chen Ridong wrote:
>>>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>>>
>>>>>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>>>>>
>>>>>> Replace repetitive locking patterns with new helpers:
>>>>>> - cpus_read_cpuset_lock()
>>>>>> - cpus_read_cpuset_unlock()
>>>>>>
>>>>>> This makes the code cleaner and ensures consistent lock ordering.
>>>>>>
>>>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>>>> ---
>>>>>>     kernel/cgroup/cpuset-internal.h |  2 ++
>>>>>>     kernel/cgroup/cpuset-v1.c       | 12 +++------
>>>>>>     kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>>>>>>     3 files changed, 28 insertions(+), 34 deletions(-)
>>>>>>
>>>>>> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
>>>>>> index 75b3aef39231..6fb00c96044d 100644
>>>>>> --- a/kernel/cgroup/cpuset-internal.h
>>>>>> +++ b/kernel/cgroup/cpuset-internal.h
>>>>>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int
>>>>>> turning_on)
>>>>>>     ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>>>>>                         char *buf, size_t nbytes, loff_t off);
>>>>>>     int cpuset_common_seq_show(struct seq_file *sf, void *v);
>>>>>> +void cpus_read_cpuset_lock(void);
>>>>>> +void cpus_read_cpuset_unlock(void);
>>>>> The names are not intuitive. I would prefer just extend the cpuset_lock/unlock to include
>>>>> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in the cpuset code. Also, there is
>>>>> now no external user of cpuset_lock/unlock, we may as well remove them from include/linux/cpuset.h.
>>>>>
>>>>> Cheers,
>>>>> Longman
>>>> I like the idea and have considered it.
>>>> However, I noticed that cpuset_lock() is being used in __sched_setscheduler().
>>> Right, I overlooked the cpuset_lock() call in kernel/sched/syscall.c. So we can't remove it from
>>> include/linux/cpuset.h.
>>>
>>> This call is invoked to ensure cpusets information is stable. However, it doesn't hurt if the
>>> cpus_read_lock() is also acquired as a result. Alternatively, we can use a name like
>>> cpuset_full_lock() to include cpus_read_lock().
>> I have a correction. According to commit d74b27d63a8b ("cgroup/cpuset: Change cpuset_rwsem and
>> hotplug lock order") , sched_scheduler() can be called while holding cpus_hotplug_lock. So we should
>> keep cpuset_lock() as it is.
>>
>> Cheers,
>> Longman
> Thank you Longman, this is very helpful.
>
> I had considered whether we can add cpus_read_lock() to the cpuset_lock, but based on your
> explanation, I now understand this approach would not work.
>
> For clarity, would it be acceptable to rename:
> cpus_read_cpuset_lock() -> cpuset_full_lock()
> cpus_read_cpuset_unlock() -> cpuset_full_unlock()

Yes, that is what I want to see. Note that both cpus_read_lock() and
cpuset_mutex are needed to modify cpuset data. Taking just cpuset_mutex
will prevent others from making changes to the cpuset data, but is not
enough to make modifications.

Cheers,
Longman

>



* Re: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
  2025-08-15  0:44     ` Chen Ridong
@ 2025-08-15 19:15       ` Waiman Long
  2025-08-16  0:21         ` Chen Ridong
  0 siblings, 1 reply; 19+ messages in thread
From: Waiman Long @ 2025-08-15 19:15 UTC (permalink / raw)
  To: Chen Ridong, kernel test robot, tj, hannes, mkoutny
  Cc: oe-kbuild-all, cgroups, linux-kernel, lujialin4, chenridong,
	christophe.jaillet


On 8/14/25 8:44 PM, Chen Ridong wrote:
>
> On 2025/8/14 5:28, kernel test robot wrote:
>> Hi Chen,
>>
>> kernel test robot noticed the following build warnings:
>>
>> [auto build test WARNING on tj-cgroup/for-next]
>> [also build test WARNING on linus/master v6.17-rc1 next-20250813]
>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>> And when submitting patch, we suggest to use '--base' as documented in
>> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>>
>> url:    https://github.com/intel-lab-lkp/linux/commits/Chen-Ridong/cpuset-remove-redundant-CS_ONLINE-flag/20250813-164651
>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
>> patch link:    https://lore.kernel.org/r/20250813082904.1091651-4-chenridong%40huaweicloud.com
>> patch subject: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
>> config: x86_64-defconfig (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/config)
>> compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
>> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/reproduce)
>>
>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>> the same patch/commit), kindly add following tags
>> | Reported-by: kernel test robot <lkp@intel.com>
>> | Closes: https://lore.kernel.org/oe-kbuild-all/202508140524.S2O4D57k-lkp@intel.com/
>>
>> All warnings (new ones prefixed by >>):
>>
>>>> Warning: kernel/cgroup/cpuset.c:422 function parameter 'pmasks' not described in 'alloc_cpumasks'
> Hi all,
>
> Thank you for the warning about the comment issue - I will fix it in the next version.
>
> I would appreciate it if you could review this patch. If you have any feedback, I can update the entire
> series together.
>
Sorry for not responding to this patch. It looked reasonable to me. I 
did miss the error in the comment though. Fortunately it was caught by 
the test robot.

Cheers,
Longman



* Re: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
  2025-08-15 19:15       ` Waiman Long
@ 2025-08-16  0:21         ` Chen Ridong
  0 siblings, 0 replies; 19+ messages in thread
From: Chen Ridong @ 2025-08-16  0:21 UTC (permalink / raw)
  To: Waiman Long, kernel test robot, tj, hannes, mkoutny
  Cc: oe-kbuild-all, cgroups, linux-kernel, lujialin4, chenridong,
	christophe.jaillet



On 2025/8/16 3:15, Waiman Long wrote:
> 
> On 8/14/25 8:44 PM, Chen Ridong wrote:
>>
>> On 2025/8/14 5:28, kernel test robot wrote:
>>> Hi Chen,
>>>
>>> kernel test robot noticed the following build warnings:
>>>
>>> [auto build test WARNING on tj-cgroup/for-next]
>>> [also build test WARNING on linus/master v6.17-rc1 next-20250813]
>>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>>> And when submitting patch, we suggest to use '--base' as documented in
>>> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>>>
>>> url:   
>>> https://github.com/intel-lab-lkp/linux/commits/Chen-Ridong/cpuset-remove-redundant-CS_ONLINE-flag/20250813-164651
>>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
>>> patch link:    https://lore.kernel.org/r/20250813082904.1091651-4-chenridong%40huaweicloud.com
>>> patch subject: [-next v2 3/4] cpuset: separate tmpmasks and cpuset allocation logic
>>> config: x86_64-defconfig
>>> (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/config)
>>> compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
>>> reproduce (this is a W=1 build):
>>> (https://download.01.org/0day-ci/archive/20250814/202508140524.S2O4D57k-lkp@intel.com/reproduce)
>>>
>>> If you fix the issue in a separate patch/commit (i.e. not just a new version of
>>> the same patch/commit), kindly add following tags
>>> | Reported-by: kernel test robot <lkp@intel.com>
>>> | Closes: https://lore.kernel.org/oe-kbuild-all/202508140524.S2O4D57k-lkp@intel.com/
>>>
>>> All warnings (new ones prefixed by >>):
>>>
>>>>> Warning: kernel/cgroup/cpuset.c:422 function parameter 'pmasks' not described in 'alloc_cpumasks'
>> Hi all,
>>
>> Thank you for the warning about the comment issue - I will fix it in the next version.
>>
>> I would appreciate it if you could review this patch. If you have any feedback, I can update the entire
>> series together.
>>
> Sorry for not responding to this patch. It looked reasonable to me. I did miss the error in the
> comment though. Fortunately it was caught by the test robot.
> 
> Cheers,
> Longman

Thank you for your feedback.

-- 
Best regards,
Ridong



* Re: [-next v2 4/4] cpuset: add helpers for cpus read and cpuset_mutex locks
  2025-08-15 19:13             ` Waiman Long
@ 2025-08-16  0:23               ` Chen Ridong
  0 siblings, 0 replies; 19+ messages in thread
From: Chen Ridong @ 2025-08-16  0:23 UTC (permalink / raw)
  To: Waiman Long, tj, hannes, mkoutny
  Cc: cgroups, linux-kernel, lujialin4, chenridong, christophe.jaillet



On 2025/8/16 3:13, Waiman Long wrote:
> On 8/13/25 11:58 PM, Chen Ridong wrote:
>>
>> On 2025/8/14 11:27, Waiman Long wrote:
>>> On 8/13/25 11:13 PM, Waiman Long wrote:
>>>> On 8/13/25 8:44 PM, Chen Ridong wrote:
>>>>> On 2025/8/14 4:09, Waiman Long wrote:
>>>>>> On 8/13/25 4:29 AM, Chen Ridong wrote:
>>>>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>>>>
>>>>>>> cpuset: add helpers for cpus_read_lock and cpuset_mutex
>>>>>>>
>>>>>>> Replace repetitive locking patterns with new helpers:
>>>>>>> - cpus_read_cpuset_lock()
>>>>>>> - cpus_read_cpuset_unlock()
>>>>>>>
>>>>>>> This makes the code cleaner and ensures consistent lock ordering.
>>>>>>>
>>>>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>>>>> ---
>>>>>>>     kernel/cgroup/cpuset-internal.h |  2 ++
>>>>>>>     kernel/cgroup/cpuset-v1.c       | 12 +++------
>>>>>>>     kernel/cgroup/cpuset.c          | 48 +++++++++++++++------------------
>>>>>>>     3 files changed, 28 insertions(+), 34 deletions(-)
>>>>>>>
>>>>>>> diff --git a/kernel/cgroup/cpuset-internal.h b/kernel/cgroup/cpuset-internal.h
>>>>>>> index 75b3aef39231..6fb00c96044d 100644
>>>>>>> --- a/kernel/cgroup/cpuset-internal.h
>>>>>>> +++ b/kernel/cgroup/cpuset-internal.h
>>>>>>> @@ -276,6 +276,8 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs, int
>>>>>>> turning_on)
>>>>>>>     ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>>>>>>>                         char *buf, size_t nbytes, loff_t off);
>>>>>>>     int cpuset_common_seq_show(struct seq_file *sf, void *v);
>>>>>>> +void cpus_read_cpuset_lock(void);
>>>>>>> +void cpus_read_cpuset_unlock(void);
>>>>>> The names are not intuitive. I would prefer just extend the cpuset_lock/unlock to include
>>>>>> cpus_read_lock/unlock and we use cpuset_lock/unlock consistently in the cpuset code. Also,
>>>>>> there is
>>>>>> now no external user of cpuset_lock/unlock, we may as well remove them from
>>>>>> include/linux/cpuset.h.
>>>>>>
>>>>>> Cheers,
>>>>>> Longman
>>>>> I like the idea and have considered it.
>>>>> However, I noticed that cpuset_lock() is being used in __sched_setscheduler().
>>>> Right, I overlooked the cpuset_lock() call in kernel/sched/syscall.c. So we can't remove it from
>>>> include/linux/cpuset.h.
>>>>
>>>> This call is invoked to ensure cpusets information is stable. However, it doesn't hurt if the
>>>> cpus_read_lock() is also acquired as a result. Alternatively, we can use a name like
>>>> cpuset_full_lock() to include cpus_read_lock().
>>> I have a correction. According to commit d74b27d63a8b ("cgroup/cpuset: Change cpuset_rwsem and
>>> hotplug lock order") , sched_scheduler() can be called while holding cpus_hotplug_lock. So we should
>>> keep cpuset_lock() as it is.
>>>
>>> Cheers,
>>> Longman
>> Thank you Longman, this is very helpful.
>>
>> I had considered whether we can add cpus_read_lock() to the cpuset_lock, but based on your
>> explanation, I now understand this approach would not work.
>>
>> For clarity, would it be acceptable to rename:
>> cpus_read_cpuset_lock() -> cpuset_full_lock()
>> cpus_read_cpuset_unlock() -> cpuset_full_unlock()
> 
> Yes, that is what I want to see. Note that both cpus_read_lock() and cpuset_mutex are needed to
> modify cpuset data. Taking just cpuset_mutex will prevent others from making changes to the cpuset
> data, but is not enough to make modifications.
> 
> Cheers,
> Longman
> 
>>

I see. I will add comments to clarify how to use these helpers.
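
A rough sketch of what the renamed helpers with such comments might look
like (the cpuset_full_lock/unlock names follow the discussion above; the
comment wording is only illustrative):

	/**
	 * cpuset_full_lock - take cpus_read_lock() and cpuset_mutex in order
	 *
	 * Both locks are required to modify cpuset data; cpuset_mutex alone
	 * only keeps others from changing it.
	 */
	void cpuset_full_lock(void)
	{
		cpus_read_lock();
		mutex_lock(&cpuset_mutex);
	}

	void cpuset_full_unlock(void)
	{
		mutex_unlock(&cpuset_mutex);
		cpus_read_unlock();
	}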

-- 
Best regards,
Ridong


