From: Waiman Long <llong@redhat.com>
To: Chen Ridong <chenridong@huaweicloud.com>,
tj@kernel.org, hannes@cmpxchg.org, mkoutny@suse.com
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
lujialin4@huawei.com, chenridong@huawei.com
Subject: Re: [PATCH -next RFC -v2 04/11] cpuset: Refactor exclusive CPU mask computation logic
Date: Mon, 15 Sep 2025 14:47:35 -0400
Message-ID: <eaae38ba-36ed-4d1b-aefb-10b9a1874845@redhat.com>
In-Reply-To: <20250909033233.2731579-5-chenridong@huaweicloud.com>
On 9/8/25 11:32 PM, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> The current compute_effective_exclusive_cpumask function handles multiple
> scenarios with different input parameters, making the code difficult to
> follow. This patch refactors it into two separate functions:
> compute_excpus and compute_trialcs_excpus.
>
> The compute_excpus function calculates the exclusive CPU mask for a given
> input and excludes exclusive CPUs from sibling cpusets when cs's
> exclusive_cpus is not explicitly set.
>
> The compute_trialcs_excpus function specifically handles exclusive CPU
> computation for trial cpusets used during CPU mask configuration updates,
> and always excludes exclusive CPUs from sibling cpusets.
>
> This refactoring improves code readability, making it explicit which
> function to call for each use case and which parameters to provide.
>
> Signed-off-by: Chen Ridong <chenridong@huawei.com>
> ---
> kernel/cgroup/cpuset.c | 103 ++++++++++++++++++++++++++---------------
> 1 file changed, 65 insertions(+), 38 deletions(-)
>
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index a31b05f58e0e..6015322a10ac 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -1400,38 +1400,25 @@ bool cpuset_cpu_is_isolated(int cpu)
> }
> EXPORT_SYMBOL_GPL(cpuset_cpu_is_isolated);
>
> -/*
> - * compute_effective_exclusive_cpumask - compute effective exclusive CPUs
> - * @cs: cpuset
> - * @xcpus: effective exclusive CPUs value to be set
> - * @real_cs: the real cpuset (can be NULL)
> - * Return: 0 if there is no sibling conflict, > 0 otherwise
> +/**
> + * rm_siblings_excl_cpus - Remove exclusive CPUs that are used by sibling cpusets
> + * @parent: Parent cpuset containing all siblings
> + * @cs: Current cpuset (will be skipped)
> + * @excpus: exclusive effective CPU mask to modify
> *
> - * If exclusive_cpus isn't explicitly set or a real_cs is provided, we have to
> - * scan the sibling cpusets and exclude their exclusive_cpus or effective_xcpus
> - * as well. The provision of real_cs means that a cpumask is being changed and
> - * the given cs is a trial one.
> + * This function ensures the given @excpus mask doesn't include any CPUs that
> + * are exclusively allocated to sibling cpusets. It walks through all siblings
> + * of @cs under @parent and removes their exclusive CPUs from @excpus.
> */
> -static int compute_effective_exclusive_cpumask(struct cpuset *cs,
> - struct cpumask *xcpus,
> - struct cpuset *real_cs)
> +static int rm_siblings_excl_cpus(struct cpuset *parent, struct cpuset *cs,
> + struct cpumask *excpus)
> {
> struct cgroup_subsys_state *css;
> - struct cpuset *parent = parent_cs(cs);
> struct cpuset *sibling;
> int retval = 0;
>
> - if (!xcpus)
> - xcpus = cs->effective_xcpus;
> -
> - cpumask_and(xcpus, user_xcpus(cs), parent->effective_xcpus);
> -
> - if (!real_cs) {
> - if (!cpumask_empty(cs->exclusive_cpus))
> - return 0;
> - } else {
> - cs = real_cs;
> - }
> + if (cpumask_empty(excpus))
> + return retval;
>
> /*
> * Exclude exclusive CPUs from siblings
> @@ -1441,20 +1428,60 @@ static int compute_effective_exclusive_cpumask(struct cpuset *cs,
> if (sibling == cs)
> continue;
>
> - if (cpumask_intersects(xcpus, sibling->exclusive_cpus)) {
> - cpumask_andnot(xcpus, xcpus, sibling->exclusive_cpus);
> + if (cpumask_intersects(excpus, sibling->exclusive_cpus)) {
> + cpumask_andnot(excpus, excpus, sibling->exclusive_cpus);
> retval++;
> continue;
> }
> - if (cpumask_intersects(xcpus, sibling->effective_xcpus)) {
> - cpumask_andnot(xcpus, xcpus, sibling->effective_xcpus);
> + if (cpumask_intersects(excpus, sibling->effective_xcpus)) {
> + cpumask_andnot(excpus, excpus, sibling->effective_xcpus);
> retval++;
> }
> }
> rcu_read_unlock();
> +
> return retval;
> }
>
> +/*
> + * compute_excpus - compute effective exclusive CPUs
> + * @cs: cpuset
> + * @xcpus: effective exclusive CPUs value to be set
> + * Return: 0 if there is no sibling conflict, > 0 otherwise
> + *
> + * If exclusive_cpus isn't explicitly set, we have to scan the sibling cpusets
> + * and exclude their exclusive_cpus or effective_xcpus as well.
> + */
> +static int compute_excpus(struct cpuset *cs, struct cpumask *excpus)
> +{
> + struct cpuset *parent = parent_cs(cs);
> +
> + cpumask_and(excpus, user_xcpus(cs), parent->effective_xcpus);
> +
> + if (!cpumask_empty(cs->exclusive_cpus))
> + return 0;
> +
> + return rm_siblings_excl_cpus(parent, cs, excpus);
> +}
> +
> +/*
> + * compute_trialcs_excpus - Compute effective exclusive CPUs for a trial cpuset
> + * @trialcs: The trial cpuset containing the proposed new configuration
> + * @cs: The original cpuset that the trial configuration is based on
> + * Return: 0 if successful with no sibling conflict, >0 if a conflict is found
> + *
> + * Computes the effective_xcpus for a trial configuration. @cs is the real
> + * cpuset whose siblings are scanned for exclusive CPU conflicts.
> + */
> +static int compute_trialcs_excpus(struct cpuset *trialcs, struct cpuset *cs)
> +{
> + struct cpuset *parent = parent_cs(trialcs);
> + struct cpumask *excpus = trialcs->effective_xcpus;
> +
> + cpumask_and(excpus, user_xcpus(trialcs), parent->effective_xcpus);
> + return rm_siblings_excl_cpus(parent, cs, excpus);
> +}
> +
> static inline bool is_remote_partition(struct cpuset *cs)
> {
> return !list_empty(&cs->remote_sibling);
> @@ -1496,7 +1523,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
> * Note that creating a remote partition with any local partition root
> * above it or remote partition root underneath it is not allowed.
> */
> - compute_effective_exclusive_cpumask(cs, tmp->new_cpus, NULL);
> + compute_excpus(cs, tmp->new_cpus);
> WARN_ON_ONCE(cpumask_intersects(tmp->new_cpus, subpartitions_cpus));
> if (!cpumask_intersects(tmp->new_cpus, cpu_active_mask) ||
> cpumask_subset(top_cpuset.effective_cpus, tmp->new_cpus))
> @@ -1545,7 +1572,7 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
> cs->partition_root_state = PRS_MEMBER;
>
> /* effective_xcpus may need to be changed */
> - compute_effective_exclusive_cpumask(cs, NULL, NULL);
> + compute_excpus(cs, cs->effective_xcpus);
> reset_partition_data(cs);
> spin_unlock_irq(&callback_lock);
> update_unbound_workqueue_cpumask(isolcpus_updated);
> @@ -1746,12 +1773,12 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
>
> if ((cmd == partcmd_enable) || (cmd == partcmd_enablei)) {
> /*
> - * Need to call compute_effective_exclusive_cpumask() in case
> + * Need to call compute_excpus() in case
> * exclusive_cpus not set. Sibling conflict should only happen
> * if exclusive_cpus isn't set.
> */
> xcpus = tmp->delmask;
> - if (compute_effective_exclusive_cpumask(cs, xcpus, NULL))
> + if (compute_excpus(cs, xcpus))
> WARN_ON_ONCE(!cpumask_empty(cs->exclusive_cpus));
>
> /*
> @@ -2033,7 +2060,7 @@ static void compute_partition_effective_cpumask(struct cpuset *cs,
> * 2) All the effective_cpus will be used up and cp
> * has tasks
> */
> - compute_effective_exclusive_cpumask(cs, new_ecpus, NULL);
> + compute_excpus(cs, new_ecpus);
> cpumask_and(new_ecpus, new_ecpus, cpu_active_mask);
>
> rcu_read_lock();
> @@ -2112,7 +2139,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
> * its value is being processed.
> */
> if (remote && (cp != cs)) {
> - compute_effective_exclusive_cpumask(cp, tmp->new_cpus, NULL);
> + compute_excpus(cp, tmp->new_cpus);
> if (cpumask_equal(cp->effective_xcpus, tmp->new_cpus)) {
> pos_css = css_rightmost_descendant(pos_css);
> continue;
> @@ -2214,7 +2241,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
> cpumask_copy(cp->effective_cpus, tmp->new_cpus);
> cp->partition_root_state = new_prs;
> if (!cpumask_empty(cp->exclusive_cpus) && (cp != cs))
> - compute_effective_exclusive_cpumask(cp, NULL, NULL);
> + compute_excpus(cp, cp->effective_xcpus);
>
> /*
> * Make sure effective_xcpus is properly set for a valid
> @@ -2363,7 +2390,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> * for checking validity of the partition root.
> */
> if (!cpumask_empty(trialcs->exclusive_cpus) || is_partition_valid(cs))
> - compute_effective_exclusive_cpumask(trialcs, NULL, cs);
> + compute_trialcs_excpus(trialcs, cs);
> }
>
> /* Nothing to do if the cpus didn't change */
> @@ -2499,7 +2526,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
> * Reject the change if there is exclusive CPUs conflict with
> * the siblings.
> */
> - if (compute_effective_exclusive_cpumask(trialcs, NULL, cs))
> + if (compute_trialcs_excpus(trialcs, cs))
> return -EINVAL;
> }
>
Reviewed-by: Waiman Long <longman@redhat.com>