public inbox for linux-kernel@vger.kernel.org
* [PATCH V5 0/2] Add CPU latency QoS support for ufs driver
@ 2023-12-13 12:43 Maramaina Naresh
  2023-12-13 12:43 ` [PATCH V5 1/2] ufs: core: " Maramaina Naresh
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Maramaina Naresh @ 2023-12-13 12:43 UTC (permalink / raw)
  To: James E.J. Bottomley, Martin K. Petersen, Peter Wang,
	Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Avri Altman, Bart Van Assche, Stanley Jhu,
	linux-scsi, linux-kernel, linux-mediatek, linux-arm-kernel,
	quic_cang, quic_nguyenb

Add CPU latency QoS support for the UFS driver. This improves random I/O
performance by ~15% for UFS.

tiotest benchmark tool I/O performance results on the sm8550 platform:

1. Without PM QoS support
	Type (Speed in IOPS) | Average of 18 iterations
	Random Read          | 37101.3
	Random Write         | 41065.13

2. With PM QoS support
	Type (Speed in IOPS) | Average of 18 iterations
	Random Read          | 42943.4
	Random Write         | 46784.9
(Improvement with PM QoS = ~15%).

This series is based on the patch below by Stanley Chu [1], moving the
PM QoS code to ufshcd.c and making it generic.

[1] https://lore.kernel.org/r/20220623035052.18802-8-stanley.chu@mediatek.com

Changes from v4:
- Addressed AngeloGioacchino's comment to update the commit text
- Addressed AngeloGioacchino's comment on code alignment

Changes from v3:
- Removed UFSHCD_CAP_PM_QOS capability flag from patch#2

Changes from v2:
- Addressed comments from bvanassche and mani
- Provided a sysfs interface to enable/disable the PM QoS feature

Changes from v1:
- Addressed bvanassche's comment to move the code into the UFSHCD core
- Changed the design from per-device PM QoS to CPU latency QoS based support
- Reverted existing PM QoS feature from MEDIATEK UFS driver
- Added PM QoS capability for both QCOM and MEDIATEK SoCs

Maramaina Naresh (2):
  ufs: core: Add CPU latency QoS support for ufs driver
  ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS
    support

 drivers/ufs/core/ufshcd.c       | 125 ++++++++++++++++++++++++++++++++
 drivers/ufs/host/ufs-mediatek.c |  17 -----
 drivers/ufs/host/ufs-mediatek.h |   3 -
 include/ufs/ufshcd.h            |   6 ++
 4 files changed, 131 insertions(+), 20 deletions(-)

-- 
2.17.1



* [PATCH V5 1/2] ufs: core: Add CPU latency QoS support for ufs driver
  2023-12-13 12:43 [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Maramaina Naresh
@ 2023-12-13 12:43 ` Maramaina Naresh
  2023-12-15  6:58   ` Peter Wang (王信友)
  2023-12-18 21:55   ` Bart Van Assche
  2023-12-13 12:43 ` [PATCH V5 2/2] ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS support Maramaina Naresh
  2023-12-15  9:05 ` [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Avri Altman
  2 siblings, 2 replies; 8+ messages in thread
From: Maramaina Naresh @ 2023-12-13 12:43 UTC (permalink / raw)
  To: James E.J. Bottomley, Martin K. Petersen, Peter Wang,
	Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Avri Altman, Bart Van Assche, Stanley Jhu,
	linux-scsi, linux-kernel, linux-mediatek, linux-arm-kernel,
	quic_cang, quic_nguyenb

Register the UFS driver with the CPU latency PM QoS framework to
improve UFS device random I/O performance.

PM QoS initialization inserts a new QoS request into the CPU
latency QoS list with the maximum latency value,
PM_QOS_DEFAULT_VALUE.

The UFS driver votes for performance mode on clock scale-up and
for power save mode on scale-down.

If the clock scaling feature is not enabled, voting is based on
whether the clocks are on or off.

Provide a sysfs interface to enable/disable the PM QoS feature.
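For illustration, toggling the knob from user space might look like the
sketch below. The "1d84000.ufshc" device name is hypothetical and differs
per platform; the script is a no-op when the node is absent:

```shell
# Hypothetical sysfs location; the controller device name is an
# assumption and is platform-specific.
UFS=/sys/bus/platform/devices/1d84000.ufshc

if [ -w "$UFS/pm_qos_enable" ]; then
	cat "$UFS/pm_qos_enable"        # 1 = PM QoS request registered (default)
	echo 0 > "$UFS/pm_qos_enable"   # remove the CPU latency QoS request
	echo 1 > "$UFS/pm_qos_enable"   # re-register it with PM_QOS_DEFAULT_VALUE
fi
```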

tiotest benchmark tool I/O performance results on the sm8550 platform:

1. Without PM QoS support
	Type (Speed in IOPS) | Average of 18 iterations
	Random Write         | 41065.13
	Random Read          | 37101.3

2. With PM QoS support
	Type (Speed in IOPS) | Average of 18 iterations
	Random Write         | 46784.9
	Random Read          | 42943.4
(Improvement with PM QoS = ~15%).

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Co-developed-by: Nitin Rawat <quic_nitirawa@quicinc.com>
Signed-off-by: Nitin Rawat <quic_nitirawa@quicinc.com>
Signed-off-by: Naveen Kumar Goud Arepalli <quic_narepall@quicinc.com>
Signed-off-by: Maramaina Naresh <quic_mnaresh@quicinc.com>
---
 drivers/ufs/core/ufshcd.c | 125 ++++++++++++++++++++++++++++++++++++++
 include/ufs/ufshcd.h      |   6 ++
 2 files changed, 131 insertions(+)

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index ae9936fc6ffb..a8ee6e02e83e 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -1001,6 +1001,19 @@ static bool ufshcd_is_unipro_pa_params_tuning_req(struct ufs_hba *hba)
 	return ufshcd_get_local_unipro_ver(hba) < UFS_UNIPRO_VER_1_6;
 }
 
+/**
+ * ufshcd_pm_qos_update - update PM QoS request
+ * @hba: per adapter instance
+ * @on: If true, vote for performance PM QoS mode, otherwise power save mode
+ */
+static void ufshcd_pm_qos_update(struct ufs_hba *hba, bool on)
+{
+	if (!hba->pm_qos_enabled)
+		return;
+
+	cpu_latency_qos_update_request(&hba->pm_qos_req, on ? 0 : PM_QOS_DEFAULT_VALUE);
+}
+
 /**
  * ufshcd_set_clk_freq - set UFS controller clock frequencies
  * @hba: per adapter instance
@@ -1147,8 +1160,11 @@ static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
 					    hba->devfreq->previous_freq);
 		else
 			ufshcd_set_clk_freq(hba, !scale_up);
+		goto out;
 	}
 
+	ufshcd_pm_qos_update(hba, scale_up);
+
 out:
 	trace_ufshcd_profile_clk_scaling(dev_name(hba->dev),
 			(scale_up ? "up" : "down"),
@@ -8615,6 +8631,108 @@ static void ufshcd_set_timestamp_attr(struct ufs_hba *hba)
 	ufshcd_release(hba);
 }
 
+/**
+ * ufshcd_pm_qos_init - initialize PM QoS request
+ * @hba: per adapter instance
+ */
+static void ufshcd_pm_qos_init(struct ufs_hba *hba)
+{
+
+	if (hba->pm_qos_enabled)
+		return;
+
+	cpu_latency_qos_add_request(&hba->pm_qos_req, PM_QOS_DEFAULT_VALUE);
+
+	if (cpu_latency_qos_request_active(&hba->pm_qos_req))
+		hba->pm_qos_enabled = true;
+}
+
+/**
+ * ufshcd_pm_qos_exit - remove request from PM QoS
+ * @hba: per adapter instance
+ */
+static void ufshcd_pm_qos_exit(struct ufs_hba *hba)
+{
+	if (!hba->pm_qos_enabled)
+		return;
+
+	cpu_latency_qos_remove_request(&hba->pm_qos_req);
+	hba->pm_qos_enabled = false;
+}
+
+/**
+ * ufshcd_pm_qos_enable_show - sysfs handler to show pm qos enable value
+ * @dev: device associated with the UFS controller
+ * @attr: sysfs attribute handle
+ * @buf: buffer for sysfs file
+ *
+ * Print 1 if PM QoS feature is enabled, 0 if disabled.
+ *
+ * Return: number of characters written to @buf.
+ */
+static ssize_t ufshcd_pm_qos_enable_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+
+	return sysfs_emit(buf, "%d\n", hba->pm_qos_enabled);
+}
+
+/**
+ * ufshcd_pm_qos_enable_store - sysfs handler to store value
+ * @dev: device associated with the UFS controller
+ * @attr: sysfs attribute handle
+ * @buf: buffer for sysfs file
+ * @count: number of characters in @buf
+ *
+ * Input 0 to disable PM QoS, any non-zero value to enable.
+ * Default state: 1
+ *
+ * Return: @count on success, a negative error code upon failure.
+ */
+static ssize_t ufshcd_pm_qos_enable_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct ufs_hba *hba = dev_get_drvdata(dev);
+	u32 value;
+
+	if (kstrtou32(buf, 0, &value))
+		return -EINVAL;
+
+	value = !!value;
+	if (value)
+		ufshcd_pm_qos_init(hba);
+	else
+		ufshcd_pm_qos_exit(hba);
+
+	return count;
+}
+
+/**
+ * ufshcd_init_pm_qos_sysfs - initialize PM QoS sysfs entry
+ * @hba: per adapter instance
+ */
+static void ufshcd_init_pm_qos_sysfs(struct ufs_hba *hba)
+{
+	hba->pm_qos_enable_attr.show = ufshcd_pm_qos_enable_show;
+	hba->pm_qos_enable_attr.store = ufshcd_pm_qos_enable_store;
+	sysfs_attr_init(&hba->pm_qos_enable_attr.attr);
+	hba->pm_qos_enable_attr.attr.name = "pm_qos_enable";
+	hba->pm_qos_enable_attr.attr.mode = 0644;
+	if (device_create_file(hba->dev, &hba->pm_qos_enable_attr))
+		dev_err(hba->dev, "Failed to create sysfs for pm_qos_enable\n");
+}
+
+/**
+ * ufshcd_remove_pm_qos_sysfs - remove PM QoS sysfs entry
+ * @hba: per adapter instance
+ */
+static void ufshcd_remove_pm_qos_sysfs(struct ufs_hba *hba)
+{
+	if (hba->pm_qos_enable_attr.attr.name)
+		device_remove_file(hba->dev, &hba->pm_qos_enable_attr);
+}
+
 /**
  * ufshcd_add_lus - probe and add UFS logical units
  * @hba: per-adapter instance
@@ -9204,6 +9322,8 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
 	if (ret)
 		return ret;
 
+	if (!ufshcd_is_clkscaling_supported(hba))
+		ufshcd_pm_qos_update(hba, on);
 out:
 	if (ret) {
 		list_for_each_entry(clki, head, list) {
@@ -9381,6 +9501,8 @@ static int ufshcd_hba_init(struct ufs_hba *hba)
 static void ufshcd_hba_exit(struct ufs_hba *hba)
 {
 	if (hba->is_powered) {
+		ufshcd_remove_pm_qos_sysfs(hba);
+		ufshcd_pm_qos_exit(hba);
 		ufshcd_exit_clk_scaling(hba);
 		ufshcd_exit_clk_gating(hba);
 		if (hba->eh_wq)
@@ -10030,6 +10152,7 @@ static int ufshcd_suspend(struct ufs_hba *hba)
 	ufshcd_vreg_set_lpm(hba);
 	/* Put the host controller in low power mode if possible */
 	ufshcd_hba_vreg_set_lpm(hba);
+	ufshcd_pm_qos_update(hba, false);
 	return ret;
 }
 
@@ -10576,6 +10699,8 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
 	ufs_sysfs_add_nodes(hba->dev);
 
 	device_enable_async_suspend(dev);
+	ufshcd_pm_qos_init(hba);
+	ufshcd_init_pm_qos_sysfs(hba);
 	return 0;
 
 free_tmf_queue:
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index d862c8ddce03..fa7434a9073d 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -912,6 +912,9 @@ enum ufshcd_mcq_opr {
  * @mcq_base: Multi circular queue registers base address
  * @uhq: array of supported hardware queues
  * @dev_cmd_queue: Queue for issuing device management commands
+ * @pm_qos_enable_attr: sysfs attribute to enable/disable PM QoS
+ * @pm_qos_req: PM QoS request handle
+ * @pm_qos_enabled: flag to track whether PM QoS is enabled
  */
 struct ufs_hba {
 	void __iomem *mmio_base;
@@ -1076,6 +1079,9 @@ struct ufs_hba {
 	struct ufs_hw_queue *uhq;
 	struct ufs_hw_queue *dev_cmd_queue;
 	struct ufshcd_mcq_opr_info_t mcq_opr[OPR_MAX];
+	struct device_attribute pm_qos_enable_attr;
+	struct pm_qos_request pm_qos_req;
+	bool pm_qos_enabled;
 };
 
 /**
-- 
2.17.1



* [PATCH V5 2/2] ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS support
  2023-12-13 12:43 [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Maramaina Naresh
  2023-12-13 12:43 ` [PATCH V5 1/2] ufs: core: " Maramaina Naresh
@ 2023-12-13 12:43 ` Maramaina Naresh
  2023-12-15  6:59   ` Peter Wang (王信友)
  2023-12-15  9:05 ` [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Avri Altman
  2 siblings, 1 reply; 8+ messages in thread
From: Maramaina Naresh @ 2023-12-13 12:43 UTC (permalink / raw)
  To: James E.J. Bottomley, Martin K. Petersen, Peter Wang,
	Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Avri Altman, Bart Van Assche, Stanley Jhu,
	linux-scsi, linux-kernel, linux-mediatek, linux-arm-kernel,
	quic_cang, quic_nguyenb

The PM QoS feature from the MediaTek UFS driver has been moved to the
UFSHCD core. Remove it from the MediaTek UFS driver as it is now redundant.

Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Signed-off-by: Maramaina Naresh <quic_mnaresh@quicinc.com>
---
 drivers/ufs/host/ufs-mediatek.c | 17 -----------------
 drivers/ufs/host/ufs-mediatek.h |  3 ---
 2 files changed, 20 deletions(-)

diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
index fc61790d289b..1e7dadcb644f 100644
--- a/drivers/ufs/host/ufs-mediatek.c
+++ b/drivers/ufs/host/ufs-mediatek.c
@@ -17,7 +17,6 @@
 #include <linux/of_platform.h>
 #include <linux/phy/phy.h>
 #include <linux/platform_device.h>
-#include <linux/pm_qos.h>
 #include <linux/regulator/consumer.h>
 #include <linux/reset.h>
 #include <linux/soc/mediatek/mtk_sip_svc.h>
@@ -626,21 +625,9 @@ static void ufs_mtk_init_host_caps(struct ufs_hba *hba)
 	dev_info(hba->dev, "caps: 0x%x", host->caps);
 }
 
-static void ufs_mtk_boost_pm_qos(struct ufs_hba *hba, bool boost)
-{
-	struct ufs_mtk_host *host = ufshcd_get_variant(hba);
-
-	if (!host || !host->pm_qos_init)
-		return;
-
-	cpu_latency_qos_update_request(&host->pm_qos_req,
-				       boost ? 0 : PM_QOS_DEFAULT_VALUE);
-}
-
 static void ufs_mtk_scale_perf(struct ufs_hba *hba, bool scale_up)
 {
 	ufs_mtk_boost_crypt(hba, scale_up);
-	ufs_mtk_boost_pm_qos(hba, scale_up);
 }
 
 static void ufs_mtk_pwr_ctrl(struct ufs_hba *hba, bool on)
@@ -959,10 +946,6 @@ static int ufs_mtk_init(struct ufs_hba *hba)
 
 	host->ip_ver = ufshcd_readl(hba, REG_UFS_MTK_IP_VER);
 
-	/* Initialize pm-qos request */
-	cpu_latency_qos_add_request(&host->pm_qos_req, PM_QOS_DEFAULT_VALUE);
-	host->pm_qos_init = true;
-
 	goto out;
 
 out_variant_clear:
diff --git a/drivers/ufs/host/ufs-mediatek.h b/drivers/ufs/host/ufs-mediatek.h
index f76e80d91729..38eab95b0f79 100644
--- a/drivers/ufs/host/ufs-mediatek.h
+++ b/drivers/ufs/host/ufs-mediatek.h
@@ -7,7 +7,6 @@
 #define _UFS_MEDIATEK_H
 
 #include <linux/bitops.h>
-#include <linux/pm_qos.h>
 #include <linux/soc/mediatek/mtk_sip_svc.h>
 
 /*
@@ -167,7 +166,6 @@ struct ufs_mtk_mcq_intr_info {
 
 struct ufs_mtk_host {
 	struct phy *mphy;
-	struct pm_qos_request pm_qos_req;
 	struct regulator *reg_va09;
 	struct reset_control *hci_reset;
 	struct reset_control *unipro_reset;
@@ -178,7 +176,6 @@ struct ufs_mtk_host {
 	struct ufs_mtk_hw_ver hw_ver;
 	enum ufs_mtk_host_caps caps;
 	bool mphy_powered_on;
-	bool pm_qos_init;
 	bool unipro_lpm;
 	bool ref_clk_enabled;
 	u16 ref_clk_ungating_wait_us;
-- 
2.17.1



* Re: [PATCH V5 1/2] ufs: core: Add CPU latency QoS support for ufs driver
  2023-12-13 12:43 ` [PATCH V5 1/2] ufs: core: " Maramaina Naresh
@ 2023-12-15  6:58   ` Peter Wang (王信友)
  2023-12-18 21:55   ` Bart Van Assche
  1 sibling, 0 replies; 8+ messages in thread
From: Peter Wang (王信友) @ 2023-12-15  6:58 UTC (permalink / raw)
  To: matthias.bgg@gmail.com, jejb@linux.ibm.com,
	angelogioacchino.delregno@collabora.com, quic_mnaresh@quicinc.com,
	martin.petersen@oracle.com
  Cc: linux-kernel@vger.kernel.org, linux-mediatek@lists.infradead.org,
	quic_nguyenb@quicinc.com, avri.altman@wdc.com, bvanassche@acm.org,
	linux-scsi@vger.kernel.org, alim.akhtar@samsung.com,
	chu.stanley@gmail.com, linux-arm-kernel@lists.infradead.org,
	quic_cang@quicinc.com

On Wed, 2023-12-13 at 18:13 +0530, Maramaina Naresh wrote:
>  Register the UFS driver with the CPU latency PM QoS framework to
> improve UFS device random I/O performance.
> 
> PM QoS initialization inserts a new QoS request into the CPU
> latency QoS list with the maximum latency value,
> PM_QOS_DEFAULT_VALUE.
> 
> The UFS driver votes for performance mode on clock scale-up and
> for power save mode on scale-down.
> 
> If the clock scaling feature is not enabled, voting is based on
> whether the clocks are on or off.
> 
> Provide a sysfs interface to enable/disable the PM QoS feature.
> 
> tiotest benchmark tool I/O performance results on the sm8550 platform:
> 
> 1. Without PM QoS support
> 	Type (Speed in IOPS) | Average of 18 iterations
> 	Random Write         | 41065.13
> 	Random Read          | 37101.3
> 
> 2. With PM QoS support
> 	Type (Speed in IOPS) | Average of 18 iterations
> 	Random Write         | 46784.9
> 	Random Read          | 42943.4
> (Improvement with PM QoS = ~15%).
> 

Reviewed-by: Peter Wang <peter.wang@mediatek.com>



* Re: [PATCH V5 2/2] ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS support
  2023-12-13 12:43 ` [PATCH V5 2/2] ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS support Maramaina Naresh
@ 2023-12-15  6:59   ` Peter Wang (王信友)
  0 siblings, 0 replies; 8+ messages in thread
From: Peter Wang (王信友) @ 2023-12-15  6:59 UTC (permalink / raw)
  To: matthias.bgg@gmail.com, jejb@linux.ibm.com,
	angelogioacchino.delregno@collabora.com, quic_mnaresh@quicinc.com,
	martin.petersen@oracle.com
  Cc: linux-kernel@vger.kernel.org, linux-mediatek@lists.infradead.org,
	quic_nguyenb@quicinc.com, avri.altman@wdc.com, bvanassche@acm.org,
	linux-scsi@vger.kernel.org, alim.akhtar@samsung.com,
	chu.stanley@gmail.com, linux-arm-kernel@lists.infradead.org,
	quic_cang@quicinc.com

On Wed, 2023-12-13 at 18:13 +0530, Maramaina Naresh wrote:
>  The PM QoS feature from the MediaTek UFS driver has been moved to
> the UFSHCD core. Remove it from the MediaTek UFS driver as it is
> now redundant.
> 
> 

Reviewed-by: Peter Wang <peter.wang@mediatek.com>



* RE: [PATCH V5 0/2] Add CPU latency QoS support for ufs driver
  2023-12-13 12:43 [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Maramaina Naresh
  2023-12-13 12:43 ` [PATCH V5 1/2] ufs: core: " Maramaina Naresh
  2023-12-13 12:43 ` [PATCH V5 2/2] ufs: ufs-mediatek: Migrate to UFSHCD generic CPU latency PM QoS support Maramaina Naresh
@ 2023-12-15  9:05 ` Avri Altman
  2023-12-17 17:03   ` Naresh Maramaina
  2 siblings, 1 reply; 8+ messages in thread
From: Avri Altman @ 2023-12-15  9:05 UTC (permalink / raw)
  To: Maramaina Naresh, James E.J. Bottomley, Martin K. Petersen,
	Peter Wang, Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Bart Van Assche, Stanley Jhu,
	linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, quic_cang@quicinc.com,
	quic_nguyenb@quicinc.com

> Add CPU latency QoS support for the UFS driver. This improves random I/O
> performance by ~15% for UFS.
> 
> tiotest benchmark tool I/O performance results on the sm8550 platform:
Will it be possible to provide test results for non-UFS 4.0 platforms?
e.g. for SM8250, just to know if it would make sense to backport this to earlier releases.

Thanks,
Avri


* Re: [PATCH V5 0/2] Add CPU latency QoS support for ufs driver
  2023-12-15  9:05 ` [PATCH V5 0/2] Add CPU latency QoS support for ufs driver Avri Altman
@ 2023-12-17 17:03   ` Naresh Maramaina
  0 siblings, 0 replies; 8+ messages in thread
From: Naresh Maramaina @ 2023-12-17 17:03 UTC (permalink / raw)
  To: Avri Altman, James E.J. Bottomley, Martin K. Petersen, Peter Wang,
	Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Bart Van Assche, Stanley Jhu,
	linux-scsi@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mediatek@lists.infradead.org,
	linux-arm-kernel@lists.infradead.org, quic_cang@quicinc.com,
	quic_nguyenb@quicinc.com

On 12/15/2023 2:35 PM, Avri Altman wrote:
>> Add CPU latency QoS support for the UFS driver. This improves random I/O
>> performance by ~15% for UFS.
>>
>> tiotest benchmark tool I/O performance results on the sm8550 platform:
> Will it be possible to provide test results for non-UFS 4.0 platforms?
> e.g. for SM8250, just to know if it would make sense to backport this to earlier releases.
> 

Hi Avri,

I ran the tiotest benchmark I/O performance test on the SM8450 platform
and saw good improvement there as well.

> Thanks,
> Avri

Thanks,
Naresh.


* Re: [PATCH V5 1/2] ufs: core: Add CPU latency QoS support for ufs driver
  2023-12-13 12:43 ` [PATCH V5 1/2] ufs: core: " Maramaina Naresh
  2023-12-15  6:58   ` Peter Wang (王信友)
@ 2023-12-18 21:55   ` Bart Van Assche
  1 sibling, 0 replies; 8+ messages in thread
From: Bart Van Assche @ 2023-12-18 21:55 UTC (permalink / raw)
  To: Maramaina Naresh, James E.J. Bottomley, Martin K. Petersen,
	Peter Wang, Matthias Brugger, AngeloGioacchino Del Regno
  Cc: Alim Akhtar, Avri Altman, Stanley Jhu, linux-scsi, linux-kernel,
	linux-mediatek, linux-arm-kernel, quic_cang, quic_nguyenb

On 12/13/23 04:43, Maramaina Naresh wrote:
> +static ssize_t ufshcd_pm_qos_enable_store(struct device *dev,
> +		struct device_attribute *attr, const char *buf, size_t count)
> +{
> +	struct ufs_hba *hba = dev_get_drvdata(dev);
> +	u32 value;
> +
> +	if (kstrtou32(buf, 0, &value))
> +		return -EINVAL;
> +
> +	value = !!value;
> +	if (value)
> +		ufshcd_pm_qos_init(hba);
> +	else
> +		ufshcd_pm_qos_exit(hba);
> +
> +	return count;
> +}

Please use kstrtobool() instead of kstrtou32().
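
A minimal sketch of the suggested change, assuming the rest of the function
stays as in the patch above (kernel code, not buildable on its own):

```c
static ssize_t ufshcd_pm_qos_enable_store(struct device *dev,
		struct device_attribute *attr, const char *buf, size_t count)
{
	struct ufs_hba *hba = dev_get_drvdata(dev);
	bool value;

	/* kstrtobool() accepts 0/1, y/n and on/off, so no manual !! is needed */
	if (kstrtobool(buf, &value))
		return -EINVAL;

	if (value)
		ufshcd_pm_qos_init(hba);
	else
		ufshcd_pm_qos_exit(hba);

	return count;
}
```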

> +static void ufshcd_init_pm_qos_sysfs(struct ufs_hba *hba)
> +{
> +	hba->pm_qos_enable_attr.show = ufshcd_pm_qos_enable_show;
> +	hba->pm_qos_enable_attr.store = ufshcd_pm_qos_enable_store;
> +	sysfs_attr_init(&hba->pm_qos_enable_attr.attr);
> +	hba->pm_qos_enable_attr.attr.name = "pm_qos_enable";
> +	hba->pm_qos_enable_attr.attr.mode = 0644;
> +	if (device_create_file(hba->dev, &hba->pm_qos_enable_attr))
> +		dev_err(hba->dev, "Failed to create sysfs for pm_qos_enable\n");
> +}

Calling device_create_file() and device_remove_file() is not acceptable because of
the race conditions these calls introduce for udev rules. Please add this attribute
into an existing group and update the is_visible callback function of that group.
See also ufs_sysfs_groups[].
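
A rough sketch of that approach; the group and member names here are
illustrative, not the actual contents of ufs_sysfs_groups[]:

```c
/* DEVICE_ATTR_RW() expects pm_qos_enable_show()/pm_qos_enable_store() */
static DEVICE_ATTR_RW(pm_qos_enable);

static struct attribute *ufs_sysfs_ufshcd_attrs[] = {
	&dev_attr_pm_qos_enable.attr,
	/* ... existing ufshcd attributes ... */
	NULL,
};

static umode_t ufs_sysfs_attr_is_visible(struct kobject *kobj,
					 struct attribute *attr, int n)
{
	/* return 0 here to hide pm_qos_enable on hosts that lack support */
	return attr->mode;
}

static const struct attribute_group ufs_sysfs_default_group = {
	.attrs = ufs_sysfs_ufshcd_attrs,
	.is_visible = ufs_sysfs_attr_is_visible,
};
```

Because the group is registered together with the device, the attribute
exists before udev processes the uevent, avoiding the race that
device_create_file() introduces.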

Thanks,

Bart.

