* [PATCH] scsi: ufs: core: Fix data race in CPU latency PM QoS request handling
From: Zhongqiu Han @ 2025-09-01 8:51 UTC
To: alim.akhtar, avri.altman, bvanassche, James.Bottomley,
martin.petersen
Cc: peter.wang, tanghuan, liu.song13, quic_nguyenb, viro, huobean,
adrian.hunter, can.guo, ebiggers, neil.armstrong,
angelogioacchino.delregno, quic_narepall, quic_mnaresh,
linux-scsi, linux-kernel, nitin.rawat, ziqi.chen, zhongqiu.han
The cpu_latency_qos_add/remove/update_request interfaces lack internal
synchronization by design, requiring the caller to ensure thread safety.
The current implementation relies on the `pm_qos_enabled` flag, which is
insufficient to prevent concurrent access and cannot serve as a proper
synchronization mechanism. This has led to data races and list corruption
issues.
A typical race condition call trace is:
[Thread A]
ufshcd_pm_qos_exit()
--> cpu_latency_qos_remove_request()
--> cpu_latency_qos_apply();
--> pm_qos_update_target()
--> plist_del <--(1) delete plist node
--> memset(req, 0, sizeof(*req));
--> hba->pm_qos_enabled = false;
[Thread B]
ufshcd_devfreq_target
--> ufshcd_devfreq_scale
--> ufshcd_scale_clks
--> ufshcd_pm_qos_update <--(2) pm_qos_enabled is true
--> cpu_latency_qos_update_request
--> pm_qos_update_target
--> plist_del <--(3) plist node use-after-free
This patch introduces a dedicated mutex to serialize PM QoS operations,
preventing data races and ensuring safe access to PM QoS resources.
Additionally, READ_ONCE is used in the sysfs interface to ensure atomic
read access to the pm_qos_enabled flag.
Fixes: 2777e73fc154 ("scsi: ufs: core: Add CPU latency QoS support for UFS driver")
Signed-off-by: Zhongqiu Han <zhongqiu.han@oss.qualcomm.com>
---
drivers/ufs/core/ufs-sysfs.c | 2 +-
drivers/ufs/core/ufshcd.c | 16 ++++++++++++++++
include/ufs/ufshcd.h | 2 ++
3 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
index 4bd7d491e3c5..8f7975010513 100644
--- a/drivers/ufs/core/ufs-sysfs.c
+++ b/drivers/ufs/core/ufs-sysfs.c
@@ -512,7 +512,7 @@ static ssize_t pm_qos_enable_show(struct device *dev,
{
struct ufs_hba *hba = dev_get_drvdata(dev);
- return sysfs_emit(buf, "%d\n", hba->pm_qos_enabled);
+ return sysfs_emit(buf, "%d\n", READ_ONCE(hba->pm_qos_enabled));
}
/**
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 926650412eaa..f259fb1790fa 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1047,14 +1047,18 @@ EXPORT_SYMBOL_GPL(ufshcd_is_hba_active);
*/
void ufshcd_pm_qos_init(struct ufs_hba *hba)
{
+ mutex_lock(&hba->pm_qos_mutex);
if (hba->pm_qos_enabled)
+ mutex_unlock(&hba->pm_qos_mutex);
return;
cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
if (cpu_latency_qos_request_active(&hba->pm_qos_req))
hba->pm_qos_enabled = true;
+
+ mutex_unlock(&hba->pm_qos_mutex);
}
/**
@@ -1063,11 +1067,15 @@ void ufshcd_pm_qos_init(struct ufs_hba *hba)
*/
void ufshcd_pm_qos_exit(struct ufs_hba *hba)
{
+ mutex_lock(&hba->pm_qos_mutex);
+
if (!hba->pm_qos_enabled)
+ mutex_unlock(&hba->pm_qos_mutex);
return;
cpu_latency_qos_remove_request(&hba->pm_qos_req);
hba->pm_qos_enabled = false;
+ mutex_unlock(&hba->pm_qos_mutex);
}
/**
@@ -1077,10 +1085,14 @@ void ufshcd_pm_qos_exit(struct ufs_hba *hba)
*/
static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
{
+ mutex_lock(&hba->pm_qos_mutex);
+
if (!hba->pm_qos_enabled)
+ mutex_unlock(&hba->pm_qos_mutex);
return;
cpu_latency_qos_update_request(&hba->pm_qos_req, on ? 0 : PM_QOS_DEFAULT_VALUE);
+ mutex_unlock(&hba->pm_qos_mutex);
}
/**
@@ -10764,6 +10776,10 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
mutex_init(&hba->ee_ctrl_mutex);
mutex_init(&hba->wb_mutex);
+
+ /* Initialize mutex for PM QoS request synchronization */
+ mutex_init(&hba->pm_qos_mutex);
+
init_rwsem(&hba->clk_scaling_lock);
ufshcd_init_clk_gating(hba);
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 30ff169878dc..e81f4346f168 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -962,6 +962,7 @@ enum ufshcd_mcq_opr {
* @ufs_rtc_update_work: A work for UFS RTC periodic update
* @pm_qos_req: PM QoS request handle
* @pm_qos_enabled: flag to check if pm qos is enabled
+ * @pm_qos_mutex: synchronizes PM QoS request and status updates
* @critical_health_count: count of critical health exceptions
* @dev_lvl_exception_count: count of device level exceptions since last reset
* @dev_lvl_exception_id: vendor specific information about the
@@ -1135,6 +1136,7 @@ struct ufs_hba {
struct delayed_work ufs_rtc_update_work;
struct pm_qos_request pm_qos_req;
bool pm_qos_enabled;
+ struct mutex pm_qos_mutex;
int critical_health_count;
atomic_t dev_lvl_exception_count;
--
2.43.0
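
For readers skimming the thread: the shape the patch is aiming for is an
early-unlock-and-return guarded by braces. A minimal sketch of one of the
helpers as intended (simplified; the posted hunks are missing the braces,
which the replies below catch):

void ufshcd_pm_qos_exit(struct ufs_hba *hba)
{
	mutex_lock(&hba->pm_qos_mutex);

	/* Nothing to remove if the request was never added. */
	if (!hba->pm_qos_enabled) {
		mutex_unlock(&hba->pm_qos_mutex);
		return;
	}

	cpu_latency_qos_remove_request(&hba->pm_qos_req);
	hba->pm_qos_enabled = false;
	mutex_unlock(&hba->pm_qos_mutex);
}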
* Re: [PATCH] scsi: ufs: core: Fix data race in CPU latency PM QoS request handling
From: kernel test robot @ 2025-09-02 6:30 UTC
To: Zhongqiu Han, alim.akhtar, avri.altman, bvanassche,
James.Bottomley, martin.petersen
Cc: oe-kbuild-all, peter.wang, tanghuan, liu.song13, quic_nguyenb,
viro, huobean, adrian.hunter, can.guo, ebiggers, neil.armstrong,
angelogioacchino.delregno, quic_narepall, quic_mnaresh,
linux-scsi, linux-kernel, nitin.rawat, ziqi.chen, zhongqiu.han
Hi Zhongqiu,
kernel test robot noticed the following build warnings:
[auto build test WARNING on jejb-scsi/for-next]
[also build test WARNING on mkp-scsi/for-next linus/master v6.17-rc4 next-20250901]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Zhongqiu-Han/scsi-ufs-core-Fix-data-race-in-CPU-latency-PM-QoS-request-handling/20250901-165540
base: https://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi.git for-next
patch link: https://lore.kernel.org/r/20250901085117.86160-1-zhongqiu.han%40oss.qualcomm.com
patch subject: [PATCH] scsi: ufs: core: Fix data race in CPU latency PM QoS request handling
config: arc-randconfig-002-20250902 (https://download.01.org/0day-ci/archive/20250902/202509021425.HuVijyYS-lkp@intel.com/config)
compiler: arc-linux-gcc (GCC) 9.5.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250902/202509021425.HuVijyYS-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509021425.HuVijyYS-lkp@intel.com/
All warnings (new ones prefixed by >>):
drivers/ufs/core/ufshcd.c: In function 'ufshcd_pm_qos_init':
>> drivers/ufs/core/ufshcd.c:1052:2: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
1052 | if (hba->pm_qos_enabled)
| ^~
drivers/ufs/core/ufshcd.c:1054:3: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
1054 | return;
| ^~~~~~
drivers/ufs/core/ufshcd.c: In function 'ufshcd_pm_qos_exit':
drivers/ufs/core/ufshcd.c:1072:2: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
1072 | if (!hba->pm_qos_enabled)
| ^~
drivers/ufs/core/ufshcd.c:1074:3: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
1074 | return;
| ^~~~~~
drivers/ufs/core/ufshcd.c: In function 'ufshcd_pm_qos_update':
drivers/ufs/core/ufshcd.c:1090:2: warning: this 'if' clause does not guard... [-Wmisleading-indentation]
1090 | if (!hba->pm_qos_enabled)
| ^~
drivers/ufs/core/ufshcd.c:1092:3: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the 'if'
1092 | return;
| ^~~~~~
vim +/if +1052 drivers/ufs/core/ufshcd.c
7a3e97b0dc4bba drivers/scsi/ufs/ufshcd.c Santosh Yaraganavi 2012-02-29 1043
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1044 /**
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1045 * ufshcd_pm_qos_init - initialize PM QoS request
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1046 * @hba: per adapter instance
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1047 */
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1048 void ufshcd_pm_qos_init(struct ufs_hba *hba)
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1049 {
5824c3647e1ad8 drivers/ufs/core/ufshcd.c Zhongqiu Han 2025-09-01 1050 mutex_lock(&hba->pm_qos_mutex);
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1051
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 @1052 if (hba->pm_qos_enabled)
5824c3647e1ad8 drivers/ufs/core/ufshcd.c Zhongqiu Han 2025-09-01 1053 mutex_unlock(&hba->pm_qos_mutex);
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1054 return;
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1055
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1056 cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1057
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1058 if (cpu_latency_qos_request_active(&hba->pm_qos_req))
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1059 hba->pm_qos_enabled = true;
5824c3647e1ad8 drivers/ufs/core/ufshcd.c Zhongqiu Han 2025-09-01 1060
5824c3647e1ad8 drivers/ufs/core/ufshcd.c Zhongqiu Han 2025-09-01 1061 mutex_unlock(&hba->pm_qos_mutex);
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1062 }
2777e73fc154e2 drivers/ufs/core/ufshcd.c Maramaina Naresh 2023-12-19 1063
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
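
The warning is the compiler catching exactly the bug: only the
mutex_unlock() call is guarded by the if, while the return that follows is
merely indented as if it were guarded too, so it executes unconditionally --
and on the path where the condition is false, the function returns with the
mutex still held. A stand-alone illustration of the pattern and its braced
fix (hypothetical userspace analogue, not the driver code):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool enabled = true;

/* Buggy shape: only the unlock is guarded; the return always runs,
 * so the lock is still held whenever 'enabled' is true. */
static void update_buggy(void)
{
	pthread_mutex_lock(&lock);
	if (!enabled)
		pthread_mutex_unlock(&lock);
		return; /* runs unconditionally; -Wmisleading-indentation fires here */
}

/* Fixed shape: braces make the early exit release the lock. */
static void update_fixed(void)
{
	pthread_mutex_lock(&lock);
	if (!enabled) {
		pthread_mutex_unlock(&lock);
		return;
	}
	/* ... guarded work ... */
	pthread_mutex_unlock(&lock);
}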
* Re: [PATCH] scsi: ufs: core: Fix data race in CPU latency PM QoS request handling
From: Ziqi Chen @ 2025-09-02 6:43 UTC
To: Zhongqiu Han, alim.akhtar, avri.altman, bvanassche,
James.Bottomley, martin.petersen
Cc: peter.wang, tanghuan, liu.song13, quic_nguyenb, viro, huobean,
adrian.hunter, can.guo, ebiggers, neil.armstrong,
angelogioacchino.delregno, quic_narepall, quic_mnaresh,
linux-scsi, linux-kernel, nitin.rawat
On 9/1/2025 4:51 PM, Zhongqiu Han wrote:
> The cpu_latency_qos_add/remove/update_request interfaces lack internal
> synchronization by design, requiring the caller to ensure thread safety.
> The current implementation relies on the `pm_qos_enabled` flag, which is
> insufficient to prevent concurrent access and cannot serve as a proper
> synchronization mechanism. This has led to data races and list corruption
> issues.
>
> A typical race condition call trace is:
>
> [Thread A]
> ufshcd_pm_qos_exit()
> --> cpu_latency_qos_remove_request()
> --> cpu_latency_qos_apply();
> --> pm_qos_update_target()
> --> plist_del <--(1) delete plist node
> --> memset(req, 0, sizeof(*req));
> --> hba->pm_qos_enabled = false;
>
> [Thread B]
> ufshcd_devfreq_target
> --> ufshcd_devfreq_scale
> --> ufshcd_scale_clks
> --> ufshcd_pm_qos_update <--(2) pm_qos_enabled is true
> --> cpu_latency_qos_update_request
> --> pm_qos_update_target
> --> plist_del <--(3) plist node use-after-free
>
> This patch introduces a dedicated mutex to serialize PM QoS operations,
> preventing data races and ensuring safe access to PM QoS resources.
> Additionally, READ_ONCE is used in the sysfs interface to ensure atomic
> read access to the pm_qos_enabled flag.
>
> Fixes: 2777e73fc154 ("scsi: ufs: core: Add CPU latency QoS support for UFS driver")
> Signed-off-by: Zhongqiu Han <zhongqiu.han@oss.qualcomm.com>
> ---
> drivers/ufs/core/ufs-sysfs.c | 2 +-
> drivers/ufs/core/ufshcd.c | 16 ++++++++++++++++
> include/ufs/ufshcd.h | 2 ++
> 3 files changed, 19 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
> index 4bd7d491e3c5..8f7975010513 100644
> --- a/drivers/ufs/core/ufs-sysfs.c
> +++ b/drivers/ufs/core/ufs-sysfs.c
> @@ -512,7 +512,7 @@ static ssize_t pm_qos_enable_show(struct device *dev,
> {
> struct ufs_hba *hba = dev_get_drvdata(dev);
>
> - return sysfs_emit(buf, "%d\n", hba->pm_qos_enabled);
> + return sysfs_emit(buf, "%d\n", READ_ONCE(hba->pm_qos_enabled));
> }
>
> /**
> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
> index 926650412eaa..f259fb1790fa 100644
> --- a/drivers/ufs/core/ufshcd.c
> +++ b/drivers/ufs/core/ufshcd.c
> @@ -1047,14 +1047,18 @@ EXPORT_SYMBOL_GPL(ufshcd_is_hba_active);
> */
> void ufshcd_pm_qos_init(struct ufs_hba *hba)
> {
> + mutex_lock(&hba->pm_qos_mutex);
>
> if (hba->pm_qos_enabled)
> + mutex_unlock(&hba->pm_qos_mutex);
> return;
Missing the curly braces for this if statement.
>
> cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
>
> if (cpu_latency_qos_request_active(&hba->pm_qos_req))
> hba->pm_qos_enabled = true;
> +
> + mutex_unlock(&hba->pm_qos_mutex);
> }
>
> /**
> @@ -1063,11 +1067,15 @@ void ufshcd_pm_qos_init(struct ufs_hba *hba)
> */
> void ufshcd_pm_qos_exit(struct ufs_hba *hba)
> {
> + mutex_lock(&hba->pm_qos_mutex);
> +
> if (!hba->pm_qos_enabled)
> + mutex_unlock(&hba->pm_qos_mutex);
> return;
Same here.
> cpu_latency_qos_remove_request(&hba->pm_qos_req);
> hba->pm_qos_enabled = false;
> + mutex_unlock(&hba->pm_qos_mutex);
> }
>
> /**
> @@ -1077,10 +1085,14 @@ void ufshcd_pm_qos_exit(struct ufs_hba *hba)
> */
> static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
> {
> + mutex_lock(&hba->pm_qos_mutex);
> +
> if (!hba->pm_qos_enabled)
> + mutex_unlock(&hba->pm_qos_mutex);
> return;
Same here.
> cpu_latency_qos_update_request(&hba->pm_qos_req, on ? 0 : PM_QOS_DEFAULT_VALUE);
> + mutex_unlock(&hba->pm_qos_mutex);
> }
>
> /**
> @@ -10764,6 +10776,10 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
> mutex_init(&hba->ee_ctrl_mutex);
>
> mutex_init(&hba->wb_mutex);
> +
> + /* Initialize mutex for PM QoS request synchronization */
> + mutex_init(&hba->pm_qos_mutex);
> +
> init_rwsem(&hba->clk_scaling_lock);
>
> ufshcd_init_clk_gating(hba);
> diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
> index 30ff169878dc..e81f4346f168 100644
> --- a/include/ufs/ufshcd.h
> +++ b/include/ufs/ufshcd.h
> @@ -962,6 +962,7 @@ enum ufshcd_mcq_opr {
> * @ufs_rtc_update_work: A work for UFS RTC periodic update
> * @pm_qos_req: PM QoS request handle
> * @pm_qos_enabled: flag to check if pm qos is enabled
> + * @pm_qos_mutex: synchronizes PM QoS request and status updates
> * @critical_health_count: count of critical health exceptions
> * @dev_lvl_exception_count: count of device level exceptions since last reset
> * @dev_lvl_exception_id: vendor specific information about the
> @@ -1135,6 +1136,7 @@ struct ufs_hba {
> struct delayed_work ufs_rtc_update_work;
> struct pm_qos_request pm_qos_req;
> bool pm_qos_enabled;
> + struct mutex pm_qos_mutex;
>
> int critical_health_count;
> atomic_t dev_lvl_exception_count;
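
To make the requested change concrete: all three flagged if statements need
the same treatment. ufshcd_pm_qos_init() would become something like the
following (a sketch of the shape being requested, not the actual v2 posting):

void ufshcd_pm_qos_init(struct ufs_hba *hba)
{
	mutex_lock(&hba->pm_qos_mutex);

	/* Already initialized: drop the lock before bailing out. */
	if (hba->pm_qos_enabled) {
		mutex_unlock(&hba->pm_qos_mutex);
		return;
	}

	cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);

	if (cpu_latency_qos_request_active(&hba->pm_qos_req))
		hba->pm_qos_enabled = true;

	mutex_unlock(&hba->pm_qos_mutex);
}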
* Re: [PATCH] scsi: ufs: core: Fix data race in CPU latency PM QoS request handling
From: Zhongqiu Han @ 2025-09-02 8:45 UTC
To: Ziqi Chen, alim.akhtar, avri.altman, bvanassche, James.Bottomley,
martin.petersen
Cc: peter.wang, tanghuan, liu.song13, quic_nguyenb, viro, huobean,
adrian.hunter, can.guo, ebiggers, neil.armstrong,
angelogioacchino.delregno, quic_narepall, quic_mnaresh,
linux-scsi, linux-kernel, nitin.rawat, zhongqiu.han
On 9/2/2025 2:43 PM, Ziqi Chen wrote:
>
> On 9/1/2025 4:51 PM, Zhongqiu Han wrote:
>> The cpu_latency_qos_add/remove/update_request interfaces lack internal
>> synchronization by design, requiring the caller to ensure thread safety.
>> The current implementation relies on the `pm_qos_enabled` flag, which is
>> insufficient to prevent concurrent access and cannot serve as a proper
>> synchronization mechanism. This has led to data races and list
>> corruption
>> issues.
>>
>> A typical race condition call trace is:
>>
>> [Thread A]
>> ufshcd_pm_qos_exit()
>> --> cpu_latency_qos_remove_request()
>> --> cpu_latency_qos_apply();
>> --> pm_qos_update_target()
>> --> plist_del <--(1) delete plist node
>> --> memset(req, 0, sizeof(*req));
>> --> hba->pm_qos_enabled = false;
>>
>> [Thread B]
>> ufshcd_devfreq_target
>> --> ufshcd_devfreq_scale
>> --> ufshcd_scale_clks
>> --> ufshcd_pm_qos_update <--(2) pm_qos_enabled is true
>> --> cpu_latency_qos_update_request
>> --> pm_qos_update_target
>> --> plist_del <--(3) plist node use-after-free
>>
>> This patch introduces a dedicated mutex to serialize PM QoS operations,
>> preventing data races and ensuring safe access to PM QoS resources.
>> Additionally, READ_ONCE is used in the sysfs interface to ensure atomic
>> read access to the pm_qos_enabled flag.
>>
>> Fixes: 2777e73fc154 ("scsi: ufs: core: Add CPU latency QoS support
>> for UFS driver")
>> Signed-off-by: Zhongqiu Han <zhongqiu.han@oss.qualcomm.com>
>> ---
>> drivers/ufs/core/ufs-sysfs.c | 2 +-
>> drivers/ufs/core/ufshcd.c | 16 ++++++++++++++++
>> include/ufs/ufshcd.h | 2 ++
>> 3 files changed, 19 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
>> index 4bd7d491e3c5..8f7975010513 100644
>> --- a/drivers/ufs/core/ufs-sysfs.c
>> +++ b/drivers/ufs/core/ufs-sysfs.c
>> @@ -512,7 +512,7 @@ static ssize_t pm_qos_enable_show(struct device
>> *dev,
>> {
>> struct ufs_hba *hba = dev_get_drvdata(dev);
>> - return sysfs_emit(buf, "%d\n", hba->pm_qos_enabled);
>> + return sysfs_emit(buf, "%d\n", READ_ONCE(hba->pm_qos_enabled));
>> }
>> /**
>> diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
>> index 926650412eaa..f259fb1790fa 100644
>> --- a/drivers/ufs/core/ufshcd.c
>> +++ b/drivers/ufs/core/ufshcd.c
>> @@ -1047,14 +1047,18 @@ EXPORT_SYMBOL_GPL(ufshcd_is_hba_active);
>> */
>> void ufshcd_pm_qos_init(struct ufs_hba *hba)
>> {
>> + mutex_lock(&hba->pm_qos_mutex);
>> if (hba->pm_qos_enabled)
>> + mutex_unlock(&hba->pm_qos_mutex);
>> return;
> Missing the curly braces for this if statement.
Hi Ziqi,
Thanks for the review. Yes, I will fix it in v2:
https://lore.kernel.org/all/20250902074829.657343-1-zhongqiu.han@oss.qualcomm.com/
The internal test version does not contain this bug and is correct.
>> cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
>> if (cpu_latency_qos_request_active(&hba->pm_qos_req))
>> hba->pm_qos_enabled = true;
>> +
>> + mutex_unlock(&hba->pm_qos_mutex);
>> }
>> /**
>> @@ -1063,11 +1067,15 @@ void ufshcd_pm_qos_init(struct ufs_hba *hba)
>> */
>> void ufshcd_pm_qos_exit(struct ufs_hba *hba)
>> {
>> + mutex_lock(&hba->pm_qos_mutex);
>> +
>> if (!hba->pm_qos_enabled)
>> + mutex_unlock(&hba->pm_qos_mutex);
>> return;
> Same here.
Acked.
>> cpu_latency_qos_remove_request(&hba->pm_qos_req);
>> hba->pm_qos_enabled = false;
>> + mutex_unlock(&hba->pm_qos_mutex);
>> }
>> /**
>> @@ -1077,10 +1085,14 @@ void ufshcd_pm_qos_exit(struct ufs_hba *hba)
>> */
>> static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
>> {
>> + mutex_lock(&hba->pm_qos_mutex);
>> +
>> if (!hba->pm_qos_enabled)
>> + mutex_unlock(&hba->pm_qos_mutex);
>> return;
> Same here.
Acked.
>> cpu_latency_qos_update_request(&hba->pm_qos_req, on ? 0 :
>> PM_QOS_DEFAULT_VALUE);
>> + mutex_unlock(&hba->pm_qos_mutex);
>> }
>> /**
>> @@ -10764,6 +10776,10 @@ int ufshcd_init(struct ufs_hba *hba, void
>> __iomem *mmio_base, unsigned int irq)
>> mutex_init(&hba->ee_ctrl_mutex);
>> mutex_init(&hba->wb_mutex);
>> +
>> + /* Initialize mutex for PM QoS request synchronization */
>> + mutex_init(&hba->pm_qos_mutex);
>> +
>> init_rwsem(&hba->clk_scaling_lock);
>> ufshcd_init_clk_gating(hba);
>> diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
>> index 30ff169878dc..e81f4346f168 100644
>> --- a/include/ufs/ufshcd.h
>> +++ b/include/ufs/ufshcd.h
>> @@ -962,6 +962,7 @@ enum ufshcd_mcq_opr {
>> * @ufs_rtc_update_work: A work for UFS RTC periodic update
>> * @pm_qos_req: PM QoS request handle
>> * @pm_qos_enabled: flag to check if pm qos is enabled
>> + * @pm_qos_mutex: synchronizes PM QoS request and status updates
>> * @critical_health_count: count of critical health exceptions
>> * @dev_lvl_exception_count: count of device level exceptions since
>> last reset
>> * @dev_lvl_exception_id: vendor specific information about the
>> @@ -1135,6 +1136,7 @@ struct ufs_hba {
>> struct delayed_work ufs_rtc_update_work;
>> struct pm_qos_request pm_qos_req;
>> bool pm_qos_enabled;
>> + struct mutex pm_qos_mutex;
>> int critical_health_count;
>> atomic_t dev_lvl_exception_count;
--
Thx and BRs,
Zhongqiu Han
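
A closing observation on this bug class: the unlock-before-every-return
pairing is exactly what went wrong here, and the scoped-guard helpers from
<linux/cleanup.h> make that pairing automatic. A hypothetical sketch of the
update helper using them (assuming the driver can adopt cleanup.h; the v2
linked above is the authoritative fix):

static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
{
	/* guard(mutex) releases pm_qos_mutex on every return path. */
	guard(mutex)(&hba->pm_qos_mutex);

	if (!hba->pm_qos_enabled)
		return;

	cpu_latency_qos_update_request(&hba->pm_qos_req,
				       on ? 0 : PM_QOS_DEFAULT_VALUE);
}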