* [PATCH] cpufreq: Fix race between sysfs writes and hotplug/policy update
@ 2013-06-23  3:02 Saravana Kannan
From: Saravana Kannan @ 2013-06-23  3:02 UTC
  To: Rafael J. Wysocki, Viresh Kumar
  Cc: cpufreq, linux-pm, linux-arm-msm, Stephen Boyd

The sysfs store ops need to grab the policy write semaphore to avoid racing
with hotplug and cpufreq_update_policy() calls. Without this, we could end
up with simultaneous calls to cpufreq_driver->target().

Signed-off-by: Saravana Kannan <skannan@codeaurora.org>
---
There still seem to be race conditions between cpufreq_update_policy() and
the cpufreq hotplug path, but those look more complicated to fix, so I'm
leaving them for later.
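
For illustration, here is a rough sketch (not part of the patch) of what one
store op looks like with the locking applied, roughly the store_##file_name
macro from cpufreq.c expanded for scaling_max_freq. The parsing at the top is
paraphrased and may differ slightly from the real macro; the point is simply
where the per-policy rwsem is taken around __cpufreq_set_policy():

static ssize_t store_scaling_max_freq(struct cpufreq_policy *policy,
				      const char *buf, size_t count)
{
	unsigned int ret;
	struct cpufreq_policy new_policy;

	ret = cpufreq_get_policy(&new_policy, policy->cpu);
	if (ret)
		return -EINVAL;

	ret = sscanf(buf, "%u", &new_policy.max);
	if (ret != 1)
		return -EINVAL;

	/* Serialize against hotplug and cpufreq_update_policy() */
	lock_policy_rwsem_write(policy->cpu);

	ret = __cpufreq_set_policy(policy, &new_policy);
	policy->user_policy.max = policy->max;

	unlock_policy_rwsem_write(policy->cpu);

	return ret ? ret : count;
}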

 drivers/cpufreq/cpufreq.c           |   10 ++++++++++
 drivers/cpufreq/cpufreq_userspace.c |   11 +----------
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 2d53f47..37db7f0 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -420,9 +420,13 @@ static ssize_t store_##file_name					\
 	if (ret != 1)							\
 		return -EINVAL;						\
 									\
+	lock_policy_rwsem_write(policy->cpu);				\
+									\
 	ret = __cpufreq_set_policy(policy, &new_policy);		\
 	policy->user_policy.object = policy->object;			\
 									\
+	unlock_policy_rwsem_write(policy->cpu);				\
+									\
 	return ret ? ret : count;					\
 }
 
@@ -480,6 +484,8 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
 						&new_policy.governor))
 		return -EINVAL;
 
+	lock_policy_rwsem_write(policy->cpu);
+
 	/* Do not use cpufreq_set_policy here or the user_policy.max
 	   will be wrongly overridden */
 	ret = __cpufreq_set_policy(policy, &new_policy);
@@ -487,6 +493,8 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
 	policy->user_policy.policy = policy->policy;
 	policy->user_policy.governor = policy->governor;
 
+	unlock_policy_rwsem_write(policy->cpu);
+
 	if (ret)
 		return ret;
 	else
@@ -572,7 +580,9 @@ static ssize_t store_scaling_setspeed(struct cpufreq_policy *policy,
 	if (ret != 1)
 		return -EINVAL;
 
+	lock_policy_rwsem_write(policy->cpu);
 	policy->governor->store_setspeed(policy, freq);
+	unlock_policy_rwsem_write(policy->cpu);
 
 	return count;
 }
diff --git a/drivers/cpufreq/cpufreq_userspace.c b/drivers/cpufreq/cpufreq_userspace.c
index bbeb9c0..b7dd5b8 100644
--- a/drivers/cpufreq/cpufreq_userspace.c
+++ b/drivers/cpufreq/cpufreq_userspace.c
@@ -87,16 +87,7 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
 	if (freq > per_cpu(cpu_max_freq, policy->cpu))
 		freq = per_cpu(cpu_max_freq, policy->cpu);
 
-	/*
-	 * We're safe from concurrent calls to ->target() here
-	 * as we hold the userspace_mutex lock. If we were calling
-	 * cpufreq_driver_target, a deadlock situation might occur:
-	 * A: cpufreq_set (lock userspace_mutex) ->
-	 *      cpufreq_driver_target(lock policy->lock)
-	 * B: cpufreq_set_policy(lock policy->lock) ->
-	 *      __cpufreq_governor ->
-	 *         cpufreq_governor_userspace (lock userspace_mutex)
-	 */
+	/* The cpufreq framework holds the lock before calling this op. */
 	ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
 
  err:
-- 
1.7.8.3

