From: Zhongqiu Han <zhongqiu.han@oss.qualcomm.com>
To: rafael@kernel.org, viresh.kumar@linaro.org
Cc: venkatesh.pallipadi@intel.com, davej@redhat.com, trenn@suse.de,
linux-pm@vger.kernel.org, linux-kernel@vger.kernel.org,
zhongqiu.han@oss.qualcomm.com
Subject: [PATCH v2 1/2] cpufreq: governor: Fix race between sysfs store and dbs work handler
Date: Thu, 9 Apr 2026 19:14:06 +0800
Message-ID: <20260409111407.9775-2-zhongqiu.han@oss.qualcomm.com>
In-Reply-To: <20260409111407.9775-1-zhongqiu.han@oss.qualcomm.com>

gov_update_cpu_data() resets per-CPU prev_cpu_idle and prev_cpu_nice
for every CPU in the governed domain. It is called from sysfs store
callbacks (e.g. ignore_nice_load_store) which run under
attr_set->update_lock, held by the surrounding governor_store().
Concurrently, dbs_work_handler() calls gov->gov_dbs_update() (which
calls dbs_update()) under policy_dbs->update_mutex. dbs_update() both
reads and writes the same prev_cpu_idle / prev_cpu_nice fields. The
two racing paths are:

Path A (sysfs write, holds attr_set->update_lock only):

  governor_store()
    mutex_lock(&attr_set->update_lock)
    ignore_nice_load_store()
      dbs_data->ignore_nice_load = input
      gov_update_cpu_data(dbs_data)
        list_for_each_entry(policy_dbs, ...)
          for_each_cpu(j, ...)
            j_cdbs->prev_cpu_idle = get_cpu_idle_time(...)  /* write */
            j_cdbs->prev_cpu_nice = kcpustat_field(...)     /* write */
    mutex_unlock(&attr_set->update_lock)

Path B (work queue, holds policy_dbs->update_mutex only):

  dbs_work_handler()
    mutex_lock(&policy_dbs->update_mutex)
    gov->gov_dbs_update(policy)
      dbs_update()
        for_each_cpu(j, policy->cpus)
          idle_time = cur - j_cdbs->prev_cpu_idle           /* read  */
          j_cdbs->prev_cpu_idle = cur_idle_time             /* write */
          idle_time += cur_nice - j_cdbs->prev_cpu_nice     /* read  */
          j_cdbs->prev_cpu_nice = cur_nice                  /* write */
    mutex_unlock(&policy_dbs->update_mutex)

Because attr_set->update_lock and policy_dbs->update_mutex are two
completely independent locks, the two paths are not mutually exclusive.
This results in a data race on cpu_dbs_info.prev_cpu_idle and
cpu_dbs_info.prev_cpu_nice.

Fix this by also acquiring policy_dbs->update_mutex in
gov_update_cpu_data() for each policy, so that path A participates in
the mutual exclusion already established by dbs_work_handler(). Also
update the function comment to accurately reflect the two-level locking
contract.

Additionally, cpufreq_dbs_governor_start() initializes prev_cpu_idle
and prev_cpu_nice without holding policy_dbs->update_mutex. After
cpufreq_dbs_governor_init() returns, the new policy is already visible
in attr_set->policy_list and sysfs attributes are accessible. A
concurrent sysfs write can therefore call gov_update_cpu_data() and
race with the initialization loop on the same u64 fields. Fix this by
holding policy_dbs->update_mutex around the initialization loop in
cpufreq_dbs_governor_start() as well.

The root of this race dates back to the original ondemand/conservative
governors. Before commit ee88415caf73 ("[CPUFREQ] Cleanup locking in
conservative governor") and commit 5a75c82828e7 ("[CPUFREQ] Cleanup
locking in ondemand governor"), all accesses to prev_cpu_idle and
prev_cpu_nice in cpufreq_governor_dbs() (path X), store_ignore_nice_load()
(path Y), and do_dbs_timer() (path Z) were serialised by the same
dbs_mutex, so no race existed. Those two commits switched do_dbs_timer()
from dbs_mutex to a per-policy/per-cpu timer_mutex to reduce lock
contention, but left store_ignore_nice_load() still holding dbs_mutex.
As a result, path Y (store) and path Z (do_dbs_timer) no longer shared a
common lock, introducing a potential race on prev_cpu_idle/prev_cpu_nice
between store_ignore_nice_load() and dbs_check_cpu().

Commit 326c86deaed54a ("[CPUFREQ] Remove unneeded locks") then removed
dbs_mutex from store_ignore_nice_load() entirely, introducing an
additional potential race between store_ignore_nice_load() (path Y, now
lockless) and cpufreq_governor_dbs() (path X, still holding dbs_mutex),
while the race between path Y and path Z remained.

Fixes: ee88415caf736b ("[CPUFREQ] Cleanup locking in conservative governor")
Fixes: 5a75c82828e7c0 ("[CPUFREQ] Cleanup locking in ondemand governor")
Fixes: 326c86deaed54a ("[CPUFREQ] Remove unneeded locks")
Signed-off-by: Zhongqiu Han <zhongqiu.han@oss.qualcomm.com>
---
drivers/cpufreq/cpufreq_governor.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 86f35e451914..c0d419c95609 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -90,7 +90,8 @@ EXPORT_SYMBOL_GPL(sampling_rate_store);
  * (that may be a single policy or a bunch of them if governor tunables are
  * system-wide).
  *
- * Call under the @dbs_data mutex.
+ * Call under the @dbs_data->attr_set.update_lock. The per-policy
+ * update_mutex is acquired and released internally for each policy.
  */
 void gov_update_cpu_data(struct dbs_data *dbs_data)
 {
@@ -99,6 +100,7 @@ void gov_update_cpu_data(struct dbs_data *dbs_data)
 	list_for_each_entry(policy_dbs, &dbs_data->attr_set.policy_list, list) {
 		unsigned int j;
 
+		mutex_lock(&policy_dbs->update_mutex);
 		for_each_cpu(j, policy_dbs->policy->cpus) {
 			struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
 
@@ -107,6 +109,7 @@ void gov_update_cpu_data(struct dbs_data *dbs_data)
 			if (dbs_data->ignore_nice_load)
 				j_cdbs->prev_cpu_nice = kcpustat_field(&kcpustat_cpu(j), CPUTIME_NICE, j);
 		}
+		mutex_unlock(&policy_dbs->update_mutex);
 	}
 }
 EXPORT_SYMBOL_GPL(gov_update_cpu_data);
@@ -529,6 +532,7 @@ int cpufreq_dbs_governor_start(struct cpufreq_policy *policy)
 	ignore_nice = dbs_data->ignore_nice_load;
 	io_busy = dbs_data->io_is_busy;
 
+	mutex_lock(&policy_dbs->update_mutex);
 	for_each_cpu(j, policy->cpus) {
 		struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
 
@@ -541,6 +545,7 @@ int cpufreq_dbs_governor_start(struct cpufreq_policy *policy)
 		if (ignore_nice)
 			j_cdbs->prev_cpu_nice = kcpustat_field(&kcpustat_cpu(j), CPUTIME_NICE, j);
 	}
+	mutex_unlock(&policy_dbs->update_mutex);
 
 	gov->start(policy);
--
2.43.0