From: Waiman Long <longman@redhat.com>
To: "Tejun Heo" <tj@kernel.org>, "Zefan Li" <lizefan.x@bytedance.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Koutný" <mkoutny@suse.com>,
"Shuah Khan" <shuah@kernel.org>
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-kselftest@vger.kernel.org,
Chen Ridong <chenridong@huawei.com>,
Waiman Long <longman@redhat.com>
Subject: [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug
Date: Sun, 4 Aug 2024 21:30:17 -0400
Message-ID: <20240805013019.724300-4-longman@redhat.com>
In-Reply-To: <20240805013019.724300-1-longman@redhat.com>

It was found that some hotplug operations may trigger multiple
rebuild_sched_domains_locked() calls. Some of those intermediate calls
may use cpuset states that are not yet in their final form, leading to
incorrect sched domain settings.

Fix this problem by using the existing force_rebuild flag to inhibit
immediate rebuild_sched_domains_locked() calls while it is set, and by
making a single final call at the end. Also rename the force_rebuild
flag to force_sd_rebuild to make its meaning clearer.
Signed-off-by: Waiman Long <longman@redhat.com>
---
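For illustration, a minimal self-contained userspace sketch of the
deferred-rebuild pattern this patch applies: a batch operation sets a
force flag up front, intermediate updates skip the expensive rebuild
while the flag is set, and a single rebuild runs at the end. The names
below (rebuild_domains, update_one_cpuset, handle_hotplug) are
stand-ins for this sketch, not the actual kernel functions.

#include <stdbool.h>
#include <stdio.h>

static bool force_sd_rebuild;	/* set to defer rebuilds to the end */
static int rebuild_count;	/* counts expensive-path invocations */

/* Stand-in for rebuild_sched_domains_locked(): the expensive operation. */
static void rebuild_domains(void)
{
	rebuild_count++;
	printf("rebuilding sched domains (call #%d)\n", rebuild_count);
}

/* An intermediate update that would normally rebuild immediately. */
static void update_one_cpuset(int id)
{
	printf("updating cpuset %d\n", id);
	if (!force_sd_rebuild)	/* inhibited while a batch is in flight */
		rebuild_domains();
}

/* Stand-in for cpuset_handle_hotplug(): batch updates, rebuild once. */
static void handle_hotplug(void)
{
	force_sd_rebuild = true;	/* inhibit intermediate rebuilds */

	update_one_cpuset(1);
	update_one_cpuset(2);
	update_one_cpuset(3);

	if (force_sd_rebuild) {		/* one final rebuild at the end */
		force_sd_rebuild = false;
		rebuild_domains();
	}
}

int main(void)
{
	handle_hotplug();	/* prints a single rebuild, not three */
	return 0;
}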
kernel/cgroup/cpuset.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7287cecb27d1..e070e391d7a8 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -231,6 +231,13 @@ static cpumask_var_t isolated_cpus;
/* List of remote partition root children */
static struct list_head remote_children;
+/*
+ * A flag to force sched domain rebuild at the end of an operation while
+ * inhibiting it in the intermediate stages when set. Currently it is only
+ * set in hotplug code.
+ */
+static bool force_sd_rebuild;
+
/*
* Partition root states:
*
@@ -1467,7 +1474,7 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
}
- if (rebuild_domains)
+ if (rebuild_domains && !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -1820,7 +1827,7 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
remote_partition_disable(child, tmp);
disable_cnt++;
}
- if (disable_cnt)
+ if (disable_cnt && !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -2425,7 +2432,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
}
rcu_read_unlock();
- if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD))
+ if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD) &&
+ !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -3087,7 +3095,8 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
cs->flags = trialcs->flags;
spin_unlock_irq(&callback_lock);
- if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
+ if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed &&
+ !force_sd_rebuild)
rebuild_sched_domains_locked();
if (spread_flag_changed)
@@ -4468,11 +4477,9 @@ hotplug_update_tasks(struct cpuset *cs,
update_tasks_nodemask(cs);
}
-static bool force_rebuild;
-
void cpuset_force_rebuild(void)
{
- force_rebuild = true;
+ force_sd_rebuild = true;
}
/**
@@ -4620,15 +4627,9 @@ static void cpuset_handle_hotplug(void)
!cpumask_empty(subpartitions_cpus);
mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
- /*
- * In the rare case that hotplug removes all the cpus in
- * subpartitions_cpus, we assumed that cpus are updated.
- */
- if (!cpus_updated && !cpumask_empty(subpartitions_cpus))
- cpus_updated = true;
-
/* For v1, synchronize cpus_allowed to cpu_active_mask */
if (cpus_updated) {
+ cpuset_force_rebuild();
spin_lock_irq(&callback_lock);
if (!on_dfl)
cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
@@ -4684,8 +4685,8 @@ static void cpuset_handle_hotplug(void)
}
/* rebuild sched domains if cpus_allowed has changed */
- if (cpus_updated || force_rebuild) {
- force_rebuild = false;
+ if (force_sd_rebuild) {
+ force_sd_rebuild = false;
rebuild_sched_domains_cpuslocked();
}
--
2.43.5