* [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode()
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:10 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length Can Guo
` (4 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, Alim Akhtar, James E.J. Bottomley,
Sai Krishna Potthuri, Ajay Neeli, Peter Griffin,
Krzysztof Kozlowski, Chaotian Jing, Stanley Jhu, Orson Zhai,
Baolin Wang, Chunyan Zhang, Matthias Brugger,
AngeloGioacchino Del Regno, Bao D. Nguyen, Adrian Hunter,
Archana Patni, open list,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
moderated list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES,
moderated list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...
Most vendor specific implementations of the vops pwr_change_notify(PRE_CHANGE)
fulfill two things at once:
- Vendor specific target power mode negotiation
- Vendor specific power mode change preparation
When TX Equalization is taken into consideration, TX Equalization Training
(EQTR) needs to be done for a target power mode before changing to that
power mode. In addition, the UFSHCI spec requires starting TX EQTR from
HS-G1 (the most reliable High Speed Gear).
Running TX EQTR before pwr_change_notify(PRE_CHANGE) is not feasible
because the negotiated power mode is not known yet.
Running TX EQTR after pwr_change_notify(PRE_CHANGE) is inappropriate
because pwr_change_notify(PRE_CHANGE) has already finished preparation for
a power mode change to the negotiated power mode, yet TX EQTR first changes
the power mode to HS-G1.
Add a new vops negotiate_pwr_mode() so that vendor specific power mode
negotiation can be handled in vendor specific implementations.
Later on, TX EQTR can be added after vops negotiate_pwr_mode() and before
vops pwr_change_notify(PRE_CHANGE).
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/core/ufshcd-priv.h | 14 +++++-
drivers/ufs/core/ufshcd.c | 70 ++++++++++++++++++++++++------
drivers/ufs/host/ufs-amd-versal2.c | 3 --
drivers/ufs/host/ufs-exynos.c | 34 +++++++--------
drivers/ufs/host/ufs-hisi.c | 23 +++++-----
drivers/ufs/host/ufs-mediatek.c | 40 ++++++++---------
drivers/ufs/host/ufs-qcom.c | 24 +++++-----
drivers/ufs/host/ufs-sprd.c | 3 --
drivers/ufs/host/ufshcd-pci.c | 6 +--
include/ufs/ufshcd.h | 17 +++++---
10 files changed, 143 insertions(+), 91 deletions(-)
diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
index 7d6d19361af9..3b6958d9297a 100644
--- a/drivers/ufs/core/ufshcd-priv.h
+++ b/drivers/ufs/core/ufshcd-priv.h
@@ -167,14 +167,24 @@ static inline int ufshcd_vops_link_startup_notify(struct ufs_hba *hba,
return 0;
}
+static inline int ufshcd_vops_negotiate_pwr_mode(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ if (hba->vops && hba->vops->negotiate_pwr_mode)
+ return hba->vops->negotiate_pwr_mode(hba, dev_max_params,
+ dev_req_params);
+
+ return -ENOTSUPP;
+}
+
static inline int ufshcd_vops_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
if (hba->vops && hba->vops->pwr_change_notify)
return hba->vops->pwr_change_notify(hba, status,
- dev_max_params, dev_req_params);
+ dev_req_params);
return -ENOTSUPP;
}
diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index 8349fe2090db..91b5d5b02d22 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -335,8 +335,6 @@ static void ufshcd_suspend_clkscaling(struct ufs_hba *hba);
static int ufshcd_scale_clks(struct ufs_hba *hba, unsigned long freq,
bool scale_up);
static irqreturn_t ufshcd_intr(int irq, void *__hba);
-static int ufshcd_change_power_mode(struct ufs_hba *hba,
- struct ufs_pa_layer_attr *pwr_mode);
static int ufshcd_setup_hba_vreg(struct ufs_hba *hba, bool on);
static int ufshcd_setup_vreg(struct ufs_hba *hba, bool on);
static inline int ufshcd_config_vreg_hpm(struct ufs_hba *hba,
@@ -4662,8 +4660,26 @@ static int ufshcd_get_max_pwr_mode(struct ufs_hba *hba)
return 0;
}
-static int ufshcd_change_power_mode(struct ufs_hba *hba,
- struct ufs_pa_layer_attr *pwr_mode)
+/**
+ * ufshcd_dme_change_power_mode() - UniPro DME Power Mode change sequence
+ * @hba: per-adapter instance
+ * @pwr_mode: pointer to the target power mode (gear/lane) attributes
+ *
+ * This function handles the low-level DME (Device Management Entity)
+ * configuration required to transition the UFS link to a new power mode. It
+ * performs the following steps:
+ * 1. Checks if the requested mode matches the current state.
+ * 2. Sets M-PHY and UniPro attributes including Gear (PA_RXGEAR/TXGEAR),
+ * Lanes, Termination, and HS Series (PA_HSSERIES).
+ * 3. Configures default UniPro timeout values (DL_FC0, etc.) unless
+ * explicitly skipped via quirks.
+ * 4. Triggers the actual hardware mode change via ufshcd_uic_change_pwr_mode().
+ * 5. Updates the HBA's cached power information on success.
+ *
+ * Return: 0 on success, non-zero error code on failure.
+ */
+static int ufshcd_dme_change_power_mode(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode)
{
int ret;
@@ -4747,6 +4763,34 @@ static int ufshcd_change_power_mode(struct ufs_hba *hba,
return ret;
}
+/**
+ * ufshcd_change_power_mode() - Change UFS Link Power Mode
+ * @hba: per-adapter instance
+ * @pwr_mode: pointer to the target power mode (gear/lane) attributes
+ *
+ * This function handles the high-level sequence for changing the UFS link
+ * power mode. It triggers vendor-specific pre-change notification,
+ * executes the DME (Device Management Entity) power mode change sequence,
+ * and, upon success, triggers vendor-specific post-change notification.
+ *
+ * Return: 0 on success, non-zero error code on failure.
+ */
+int ufshcd_change_power_mode(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode)
+{
+ int ret;
+
+ ufshcd_vops_pwr_change_notify(hba, PRE_CHANGE, pwr_mode);
+
+ ret = ufshcd_dme_change_power_mode(hba, pwr_mode);
+
+ if (!ret)
+ ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, pwr_mode);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ufshcd_change_power_mode);
+
/**
* ufshcd_config_pwr_mode - configure a new power mode
* @hba: per-adapter instance
@@ -4760,19 +4804,17 @@ int ufshcd_config_pwr_mode(struct ufs_hba *hba,
struct ufs_pa_layer_attr final_params = { 0 };
int ret;
- ret = ufshcd_vops_pwr_change_notify(hba, PRE_CHANGE,
- desired_pwr_mode, &final_params);
+ ret = ufshcd_vops_negotiate_pwr_mode(hba, desired_pwr_mode,
+ &final_params);
+ if (ret) {
+ if (ret != -ENOTSUPP)
+ dev_err(hba->dev, "Failed to negotiate power mode: %d, use desired as is\n",
+ ret);
- if (ret)
memcpy(&final_params, desired_pwr_mode, sizeof(final_params));
+ }
- ret = ufshcd_change_power_mode(hba, &final_params);
-
- if (!ret)
- ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, NULL,
- &final_params);
-
- return ret;
+ return ufshcd_change_power_mode(hba, &final_params);
}
EXPORT_SYMBOL_GPL(ufshcd_config_pwr_mode);
diff --git a/drivers/ufs/host/ufs-amd-versal2.c b/drivers/ufs/host/ufs-amd-versal2.c
index 40543db621a1..52031b7256fd 100644
--- a/drivers/ufs/host/ufs-amd-versal2.c
+++ b/drivers/ufs/host/ufs-amd-versal2.c
@@ -443,7 +443,6 @@ static int ufs_versal2_phy_ratesel(struct ufs_hba *hba, u32 activelanes, u32 rx_
}
static int ufs_versal2_pwr_change_notify(struct ufs_hba *hba, enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
struct ufs_versal2_host *host = ufshcd_get_variant(hba);
@@ -451,8 +450,6 @@ static int ufs_versal2_pwr_change_notify(struct ufs_hba *hba, enum ufs_notify_ch
int ret = 0;
if (status == PRE_CHANGE) {
- memcpy(dev_req_params, dev_max_params, sizeof(struct ufs_pa_layer_attr));
-
/* If it is not a calibrated part, switch PWRMODE to SLOW_MODE */
if (!host->attcompval0 && !host->attcompval1 && !host->ctlecompval0 &&
!host->ctlecompval1) {
diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
index 76fee3a79c77..77a6c8e44485 100644
--- a/drivers/ufs/host/ufs-exynos.c
+++ b/drivers/ufs/host/ufs-exynos.c
@@ -818,12 +818,10 @@ static u32 exynos_ufs_get_hs_gear(struct ufs_hba *hba)
}
static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
struct exynos_ufs *ufs = ufshcd_get_variant(hba);
struct phy *generic_phy = ufs->phy;
- struct ufs_host_params host_params;
int ret;
if (!dev_req_params) {
@@ -832,18 +830,6 @@ static int exynos_ufs_pre_pwr_mode(struct ufs_hba *hba,
goto out;
}
- ufshcd_init_host_params(&host_params);
-
- /* This driver only support symmetric gear setting e.g. hs_tx_gear == hs_rx_gear */
- host_params.hs_tx_gear = exynos_ufs_get_hs_gear(hba);
- host_params.hs_rx_gear = exynos_ufs_get_hs_gear(hba);
-
- ret = ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
- if (ret) {
- pr_err("%s: failed to determine capabilities\n", __func__);
- goto out;
- }
-
if (ufs->drv_data->pre_pwr_change)
ufs->drv_data->pre_pwr_change(ufs, dev_req_params);
@@ -1677,17 +1663,30 @@ static int exynos_ufs_link_startup_notify(struct ufs_hba *hba,
return ret;
}
+static int exynos_ufs_negotiate_pwr_mode(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ struct ufs_host_params host_params;
+
+ ufshcd_init_host_params(&host_params);
+
+ /* This driver only support symmetric gear setting e.g. hs_tx_gear == hs_rx_gear */
+ host_params.hs_tx_gear = exynos_ufs_get_hs_gear(hba);
+ host_params.hs_rx_gear = exynos_ufs_get_hs_gear(hba);
+
+ return ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
+}
+
static int exynos_ufs_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
int ret = 0;
switch (status) {
case PRE_CHANGE:
- ret = exynos_ufs_pre_pwr_mode(hba, dev_max_params,
- dev_req_params);
+ ret = exynos_ufs_pre_pwr_mode(hba, dev_req_params);
break;
case POST_CHANGE:
ret = exynos_ufs_post_pwr_mode(hba, dev_req_params);
@@ -2015,6 +2014,7 @@ static const struct ufs_hba_variant_ops ufs_hba_exynos_ops = {
.exit = exynos_ufs_exit,
.hce_enable_notify = exynos_ufs_hce_enable_notify,
.link_startup_notify = exynos_ufs_link_startup_notify,
+ .negotiate_pwr_mode = exynos_ufs_negotiate_pwr_mode,
.pwr_change_notify = exynos_ufs_pwr_change_notify,
.setup_clocks = exynos_ufs_setup_clocks,
.setup_xfer_req = exynos_ufs_specify_nexus_t_xfer_req,
diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c
index 6f2e6bf31225..993e20ac211d 100644
--- a/drivers/ufs/host/ufs-hisi.c
+++ b/drivers/ufs/host/ufs-hisi.c
@@ -298,6 +298,17 @@ static void ufs_hisi_set_dev_cap(struct ufs_host_params *host_params)
ufshcd_init_host_params(host_params);
}
+static int ufs_hisi_negotiate_pwr_mode(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ struct ufs_host_params host_params;
+
+ ufs_hisi_set_dev_cap(&host_params);
+
+ return ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
+}
+
static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba)
{
struct ufs_hisi_host *host = ufshcd_get_variant(hba);
@@ -362,10 +373,8 @@ static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba)
static int ufs_hisi_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
- struct ufs_host_params host_params;
int ret = 0;
if (!dev_req_params) {
@@ -377,14 +386,6 @@ static int ufs_hisi_pwr_change_notify(struct ufs_hba *hba,
switch (status) {
case PRE_CHANGE:
- ufs_hisi_set_dev_cap(&host_params);
- ret = ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
- if (ret) {
- dev_err(hba->dev,
- "%s: failed to determine capabilities\n", __func__);
- goto out;
- }
-
ufs_hisi_pwr_change_pre_change(hba);
break;
case POST_CHANGE:
@@ -543,6 +544,7 @@ static const struct ufs_hba_variant_ops ufs_hba_hi3660_vops = {
.name = "hi3660",
.init = ufs_hi3660_init,
.link_startup_notify = ufs_hisi_link_startup_notify,
+ .negotiate_pwr_mode = ufs_hisi_negotiate_pwr_mode,
.pwr_change_notify = ufs_hisi_pwr_change_notify,
.suspend = ufs_hisi_suspend,
.resume = ufs_hisi_resume,
@@ -552,6 +554,7 @@ static const struct ufs_hba_variant_ops ufs_hba_hi3670_vops = {
.name = "hi3670",
.init = ufs_hi3670_init,
.link_startup_notify = ufs_hisi_link_startup_notify,
+ .negotiate_pwr_mode = ufs_hisi_negotiate_pwr_mode,
.pwr_change_notify = ufs_hisi_pwr_change_notify,
.suspend = ufs_hisi_suspend,
.resume = ufs_hisi_resume,
diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
index 05892b9ac528..7b45cf0428af 100644
--- a/drivers/ufs/host/ufs-mediatek.c
+++ b/drivers/ufs/host/ufs-mediatek.c
@@ -1317,6 +1317,23 @@ static int ufs_mtk_init(struct ufs_hba *hba)
return err;
}
+static int ufs_mtk_negotiate_pwr_mode(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ struct ufs_host_params host_params;
+
+ ufshcd_init_host_params(&host_params);
+ host_params.hs_rx_gear = UFS_HS_G5;
+ host_params.hs_tx_gear = UFS_HS_G5;
+
+ if (dev_max_params->pwr_rx == SLOW_MODE ||
+ dev_max_params->pwr_tx == SLOW_MODE)
+ host_params.desired_working_mode = UFS_PWM_MODE;
+
+ return ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
+}
+
static bool ufs_mtk_pmc_via_fastauto(struct ufs_hba *hba,
struct ufs_pa_layer_attr *dev_req_params)
{
@@ -1372,26 +1389,10 @@ static void ufs_mtk_adjust_sync_length(struct ufs_hba *hba)
}
static int ufs_mtk_pre_pwr_change(struct ufs_hba *hba,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
struct ufs_mtk_host *host = ufshcd_get_variant(hba);
- struct ufs_host_params host_params;
- int ret;
-
- ufshcd_init_host_params(&host_params);
- host_params.hs_rx_gear = UFS_HS_G5;
- host_params.hs_tx_gear = UFS_HS_G5;
-
- if (dev_max_params->pwr_rx == SLOW_MODE ||
- dev_max_params->pwr_tx == SLOW_MODE)
- host_params.desired_working_mode = UFS_PWM_MODE;
-
- ret = ufshcd_negotiate_pwr_params(&host_params, dev_max_params, dev_req_params);
- if (ret) {
- pr_info("%s: failed to determine capabilities\n",
- __func__);
- }
+ int ret = 0;
if (ufs_mtk_pmc_via_fastauto(hba, dev_req_params)) {
ufs_mtk_adjust_sync_length(hba);
@@ -1503,7 +1504,6 @@ static int ufs_mtk_auto_hibern8_disable(struct ufs_hba *hba)
static int ufs_mtk_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status stage,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
int ret = 0;
@@ -1515,8 +1515,7 @@ static int ufs_mtk_pwr_change_notify(struct ufs_hba *hba,
reg = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER);
ufs_mtk_auto_hibern8_disable(hba);
}
- ret = ufs_mtk_pre_pwr_change(hba, dev_max_params,
- dev_req_params);
+ ret = ufs_mtk_pre_pwr_change(hba, dev_req_params);
break;
case POST_CHANGE:
if (ufshcd_is_auto_hibern8_supported(hba))
@@ -2318,6 +2317,7 @@ static const struct ufs_hba_variant_ops ufs_hba_mtk_vops = {
.setup_clocks = ufs_mtk_setup_clocks,
.hce_enable_notify = ufs_mtk_hce_enable_notify,
.link_startup_notify = ufs_mtk_link_startup_notify,
+ .negotiate_pwr_mode = ufs_mtk_negotiate_pwr_mode,
.pwr_change_notify = ufs_mtk_pwr_change_notify,
.apply_dev_quirks = ufs_mtk_apply_dev_quirks,
.fixup_dev_quirks = ufs_mtk_fixup_dev_quirks,
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 375fd24ba458..cdc769886e82 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -966,13 +966,21 @@ static void ufs_qcom_set_tx_hs_equalizer(struct ufs_hba *hba, u32 gear, u32 tx_l
}
}
-static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
- enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
- struct ufs_pa_layer_attr *dev_req_params)
+static int ufs_qcom_negotiate_pwr_mode(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *dev_max_params,
+ struct ufs_pa_layer_attr *dev_req_params)
{
struct ufs_qcom_host *host = ufshcd_get_variant(hba);
struct ufs_host_params *host_params = &host->host_params;
+
+ return ufshcd_negotiate_pwr_params(host_params, dev_max_params, dev_req_params);
+}
+
+static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
+ enum ufs_notify_change_status status,
+ struct ufs_pa_layer_attr *dev_req_params)
+{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
int ret = 0;
if (!dev_req_params) {
@@ -982,13 +990,6 @@ static int ufs_qcom_pwr_change_notify(struct ufs_hba *hba,
switch (status) {
case PRE_CHANGE:
- ret = ufshcd_negotiate_pwr_params(host_params, dev_max_params, dev_req_params);
- if (ret) {
- dev_err(hba->dev, "%s: failed to determine capabilities\n",
- __func__);
- return ret;
- }
-
/*
* During UFS driver probe, always update the PHY gear to match the negotiated
* gear, so that, if quirk UFSHCD_QUIRK_REINIT_AFTER_MAX_GEAR_SWITCH is enabled,
@@ -2341,6 +2342,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.setup_clocks = ufs_qcom_setup_clocks,
.hce_enable_notify = ufs_qcom_hce_enable_notify,
.link_startup_notify = ufs_qcom_link_startup_notify,
+ .negotiate_pwr_mode = ufs_qcom_negotiate_pwr_mode,
.pwr_change_notify = ufs_qcom_pwr_change_notify,
.apply_dev_quirks = ufs_qcom_apply_dev_quirks,
.fixup_dev_quirks = ufs_qcom_fixup_dev_quirks,
diff --git a/drivers/ufs/host/ufs-sprd.c b/drivers/ufs/host/ufs-sprd.c
index 65bd8fb96b99..a5e8c591bead 100644
--- a/drivers/ufs/host/ufs-sprd.c
+++ b/drivers/ufs/host/ufs-sprd.c
@@ -161,14 +161,11 @@ static int ufs_sprd_common_init(struct ufs_hba *hba)
static int sprd_ufs_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
struct ufs_sprd_host *host = ufshcd_get_variant(hba);
if (status == PRE_CHANGE) {
- memcpy(dev_req_params, dev_max_params,
- sizeof(struct ufs_pa_layer_attr));
if (host->unipro_ver >= UFS_UNIPRO_VER_1_8)
ufshcd_dme_configure_adapt(hba, dev_req_params->gear_tx,
PA_INITIAL_ADAPT);
diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c
index 5f65dfad1a71..8a4f2381a32e 100644
--- a/drivers/ufs/host/ufshcd-pci.c
+++ b/drivers/ufs/host/ufshcd-pci.c
@@ -145,7 +145,7 @@ static int ufs_intel_set_lanes(struct ufs_hba *hba, u32 lanes)
pwr_info.lane_rx = lanes;
pwr_info.lane_tx = lanes;
- ret = ufshcd_config_pwr_mode(hba, &pwr_info);
+ ret = ufshcd_change_power_mode(hba, &pwr_info);
if (ret)
dev_err(hba->dev, "%s: Setting %u lanes, err = %d\n",
__func__, lanes, ret);
@@ -154,17 +154,15 @@ static int ufs_intel_set_lanes(struct ufs_hba *hba, u32 lanes)
static int ufs_intel_lkf_pwr_change_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *dev_max_params,
struct ufs_pa_layer_attr *dev_req_params)
{
int err = 0;
switch (status) {
case PRE_CHANGE:
- if (ufshcd_is_hs_mode(dev_max_params) &&
+ if (ufshcd_is_hs_mode(dev_req_params) &&
(hba->pwr_info.lane_rx != 2 || hba->pwr_info.lane_tx != 2))
ufs_intel_set_lanes(hba, 2);
- memcpy(dev_req_params, dev_max_params, sizeof(*dev_req_params));
break;
case POST_CHANGE:
if (ufshcd_is_hs_mode(dev_req_params)) {
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index 8563b6648976..51c2555bea73 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -302,11 +302,10 @@ struct ufs_pwr_mode_info {
* variant specific Uni-Pro initialization.
* @link_startup_notify: called before and after Link startup is carried out
* to allow variant specific Uni-Pro initialization.
+ * @negotiate_pwr_mode: called to negotiate power mode.
* @pwr_change_notify: called before and after a power mode change
* is carried out to allow vendor spesific capabilities
- * to be set. PRE_CHANGE can modify final_params based
- * on desired_pwr_mode, but POST_CHANGE must not alter
- * the final_params parameter
+ * to be set.
* @setup_xfer_req: called before any transfer request is issued
* to set some things
* @setup_task_mgmt: called before any task management request is issued
@@ -347,10 +346,12 @@ struct ufs_hba_variant_ops {
enum ufs_notify_change_status);
int (*link_startup_notify)(struct ufs_hba *,
enum ufs_notify_change_status);
- int (*pwr_change_notify)(struct ufs_hba *,
- enum ufs_notify_change_status status,
- const struct ufs_pa_layer_attr *desired_pwr_mode,
- struct ufs_pa_layer_attr *final_params);
+ int (*negotiate_pwr_mode)(struct ufs_hba *hba,
+ const struct ufs_pa_layer_attr *desired_pwr_mode,
+ struct ufs_pa_layer_attr *final_params);
+ int (*pwr_change_notify)(struct ufs_hba *hba,
+ enum ufs_notify_change_status status,
+ struct ufs_pa_layer_attr *final_params);
void (*setup_xfer_req)(struct ufs_hba *hba, int tag,
bool is_scsi_cmd);
void (*setup_task_mgmt)(struct ufs_hba *, int, u8);
@@ -1361,6 +1362,8 @@ extern int ufshcd_dme_set_attr(struct ufs_hba *hba, u32 attr_sel,
u8 attr_set, u32 mib_val, u8 peer);
extern int ufshcd_dme_get_attr(struct ufs_hba *hba, u32 attr_sel,
u32 *mib_val, u8 peer);
+extern int ufshcd_change_power_mode(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode);
extern int ufshcd_config_pwr_mode(struct ufs_hba *hba,
struct ufs_pa_layer_attr *desired_pwr_mode);
extern int ufshcd_uic_change_pwr_mode(struct ufs_hba *hba, u8 mode);
--
2.34.1
* [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
2026-03-21 3:10 ` [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode() Can Guo
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:22 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify() Can Guo
` (3 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, James E.J. Bottomley,
open list:ARM/QUALCOMM MAILING LIST, open list
If the HS-G6 Power Mode change handshake is successful and the outbound
data Lanes are expected to transmit ADAPT, the M-TX Lanes shall be
configured as
if (Adapt Type == REFRESH)
TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 = PA_PeerRxHsG6AdaptRefreshL0L1L2L3.
else if (Adapt Type == INITIAL)
TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 = PA_PeerRxHsG6AdaptInitialL0L1L2L3.
On some platforms, the ADAPT_L0_L1_L2_L3 duration on the Host TX Lanes is
only half of the theoretical ADAPT_L0_L1_L2_L3 duration TADAPT_L0_L1_L2_L3
(in PAM-4 UI) calculated from TX_HS_ADAPT_LENGTH_L0_L1_L2_L3.
For such platforms, the workaround is to double the ADAPT_L0_L1_L2_L3
duration by uplifting TX_HS_ADAPT_LENGTH_L0_L1_L2_L3. However, UniPro
initializes TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 during the HS-G6 Power Mode
change handshake, so it would be too late for SW to update
TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 after the HS-G6 Power Mode change. Instead,
update PA_PeerRxHsG6AdaptRefreshL0L1L2L3 and
PA_PeerRxHsG6AdaptInitialL0L1L2L3 after Link Startup and before the HS-G6
Power Mode change, so that UniPro uses the updated values during the HS-G6
Power Mode change handshake.
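Under the assumption that bit 7 of the attribute value selects the COARSE range and bits [6:0] hold the length (the actual ADAPT_RANGE_BIT/ADAPT_LENGTH_MASK values live in driver headers not shown in this patch), the doubling rule implemented below can be checked with a small user-space model:

```c
#include <stdint.h>
#include <assert.h>

/* Assumed attribute encoding: bit 7 = COARSE range, bits [6:0] = length. */
#define ADAPT_RANGE_BIT   0x80u
#define ADAPT_LENGTH_MASK 0x7Fu

/* TADAPT_L0_L1_L2_L3 in PAM-4 UI for a given attribute value (M-PHY). */
static uint64_t t_adapt(uint32_t adapt)
{
	uint32_t len = adapt & ADAPT_LENGTH_MASK;

	if (adapt & ADAPT_RANGE_BIT)		/* COARSE: 2^15 * 2^len */
		return (1ull << 15) << len;
	return (1ull << 15) * (len + 1);	/* FINE: 2^15 * (len + 1) */
}

/*
 * Double TADAPT: COARSE bumps the exponent by one; FINE doubles (len + 1),
 * falling back to the COARSE value 0x88 once the result would pass 127.
 */
static uint32_t double_t_adapt(uint32_t old)
{
	uint32_t len = old & ADAPT_LENGTH_MASK;

	if (old & ADAPT_RANGE_BIT)
		return (len + 1) | ADAPT_RANGE_BIT;
	if (len < 64)
		return (len << 1) + 1;
	return 0x88;	/* twice the largest FINE value (0x7F) */
}
```

With this encoding, doubling a FINE value of length 3 yields length 7 (262144 UI, exactly twice 131072 UI), and 0x7F rolls over to the COARSE value 0x88, matching the comment in the patch.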
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/host/ufs-qcom.c | 178 ++++++++++++++++++++++++++++++++++++
1 file changed, 178 insertions(+)
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index cdc769886e82..b94fe93b830e 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -1069,10 +1069,188 @@ static void ufs_qcom_override_pa_tx_hsg1_sync_len(struct ufs_hba *hba)
dev_err(hba->dev, "Failed (%d) set PA_TX_HSG1_SYNC_LENGTH\n", err);
}
+/**
+ * ufs_qcom_double_t_adapt_l0l1l2l3 - Create a new adapt that doubles the
+ * adaptation duration TADAPT_L0_L1_L2_L3 derived from the old adapt.
+ *
+ * @old_adapt: Original ADAPT_L0_L1_L2_L3 capability
+ *
+ * ADAPT_length_L0_L1_L2_L3 formula from M-PHY spec:
+ * if (ADAPT_range_L0_L1_L2_L3 == COARSE) {
+ * ADAPT_length_L0_L1_L2_L3 = [0, 12]
+ * TADAPT_L0_L1_L2_L3 = 2^15 x 2^ADAPT_length_L0_L1_L2_L3
+ * } else if (ADAPT_range_L0_L1_L2_L3 == FINE) {
+ * ADAPT_length_L0_L1_L2_L3 = [0, 127]
+ * TADAPT_L0_L1_L2_L3 = 2^15 x (ADAPT_length_L0_L1_L2_L3 + 1)
+ * }
+ *
+ * To double the adaptation duration TADAPT_L0_L1_L2_L3:
+ * 1. If adapt range is COARSE (1'b1), new adapt = old adapt + 1.
+ * 2. If adapt range is FINE (1'b0):
+ * a) If old adapt length is < 64, (new adapt + 1) = 2 * (old adapt + 1).
+ * b) If old adapt length is >= 64, set new adapt to 0x88 using COARSE
+ * range, because the new adapt computed from the equation in a) would exceed 127.
+ *
+ * Examples:
+ * ADAPT_range_L0_L1_L2_L3 | ADAPT_length_L0_L1_L2_L3 | TADAPT_L0_L1_L2_L3 (PAM-4 UI)
+ * 0 | 3 | 131072
+ * 0 | 7 | 262144
+ * 0 | 63 | 2097152
+ * 0 | 64 | 2129920
+ * 0 | 127 | 4194304
+ * 1 | 8 | 8388608
+ * 1 | 9 | 16777216
+ * 1 | 10 | 33554432
+ * 1 | 11 | 67108864
+ * 1 | 12 | 134217728
+ *
+ * Return: new adapt.
+ */
+static u32 ufs_qcom_double_t_adapt_l0l1l2l3(u32 old_adapt)
+{
+ u32 adapt_length = old_adapt & ADAPT_LENGTH_MASK;
+ u32 new_adapt;
+
+ if (IS_ADAPT_RANGE_COARSE(old_adapt)) {
+ new_adapt = (adapt_length + 1) | ADAPT_RANGE_BIT;
+ } else {
+ if (adapt_length < 64)
+ new_adapt = (adapt_length << 1) + 1;
+ else
+ /*
+ * 0x88 is the very coarse Adapt value which is two
+ * times of the largest fine Adapt value (0x7F)
+ */
+ new_adapt = 0x88;
+ }
+
+ return new_adapt;
+}
+
+static void ufs_qcom_limit_max_gear(struct ufs_hba *hba,
+ enum ufs_hs_gear_tag gear)
+{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ struct ufs_pa_layer_attr *pwr_info = &hba->max_pwr_info.info;
+ struct ufs_host_params *host_params = &host->host_params;
+
+ host_params->hs_tx_gear = gear;
+ host_params->hs_rx_gear = gear;
+ pwr_info->gear_tx = gear;
+ pwr_info->gear_rx = gear;
+
+ dev_warn(hba->dev, "Limited max gear of host and device to HS-G%d\n", gear);
+}
+
+static void ufs_qcom_fixup_tx_adapt_l0l1l2l3(struct ufs_hba *hba)
+{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ struct ufs_pa_layer_attr *pwr_info = &hba->max_pwr_info.info;
+ struct ufs_host_params *host_params = &host->host_params;
+ u32 old_adapt, new_adapt, actual_adapt;
+ bool limit_speed = false;
+ int err;
+
+ if (host->hw_ver.major != 0x7 || host->hw_ver.minor > 0x1 ||
+ host_params->hs_tx_gear <= UFS_HS_G5 ||
+ pwr_info->gear_tx <= UFS_HS_G5)
+ return;
+
+ err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTINITIALL0L1L2L3), &old_adapt);
+ if (err)
+ goto out;
+
+ if (old_adapt > ADAPT_L0L1L2L3_LENGTH_MAX) {
+ dev_err(hba->dev, "PA_PeerRxHsG6AdaptInitialL0L1L2L3 value (0x%x) exceeds MAX\n",
+ old_adapt);
+ err = -ERANGE;
+ goto out;
+ }
+
+ new_adapt = ufs_qcom_double_t_adapt_l0l1l2l3(old_adapt);
+ dev_dbg(hba->dev, "Original PA_PeerRxHsG6AdaptInitialL0L1L2L3 = 0x%x, new value = 0x%x\n",
+ old_adapt, new_adapt);
+
+ /*
+ * 0x8C is the max possible value allowed by UniPro v3.0 spec, some HWs
+ * can accept 0x8D but some cannot.
+ */
+ if (new_adapt <= ADAPT_L0L1L2L3_LENGTH_MAX ||
+ (new_adapt == ADAPT_L0L1L2L3_LENGTH_MAX + 1 && host->hw_ver.minor == 0x1)) {
+ err = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTINITIALL0L1L2L3),
+ new_adapt);
+ if (err)
+ goto out;
+
+ err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTINITIALL0L1L2L3),
+ &actual_adapt);
+ if (err)
+ goto out;
+
+ if (actual_adapt != new_adapt) {
+ limit_speed = true;
+ dev_warn(hba->dev, "PA_PeerRxHsG6AdaptInitialL0L1L2L3 0x%x, expect 0x%x\n",
+ actual_adapt, new_adapt);
+ }
+ } else {
+ limit_speed = true;
+ dev_warn(hba->dev, "New PA_PeerRxHsG6AdaptInitialL0L1L2L3 (0x%x) is too large!\n",
+ new_adapt);
+ }
+
+ err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTREFRESHL0L1L2L3), &old_adapt);
+ if (err)
+ goto out;
+
+ if (old_adapt > ADAPT_L0L1L2L3_LENGTH_MAX) {
+ dev_err(hba->dev, "PA_PeerRxHsG6AdaptRefreshL0L1L2L3 value (0x%x) exceeds MAX\n",
+ old_adapt);
+ err = -ERANGE;
+ goto out;
+ }
+
+ new_adapt = ufs_qcom_double_t_adapt_l0l1l2l3(old_adapt);
+ dev_dbg(hba->dev, "Original PA_PeerRxHsG6AdaptRefreshL0L1L2L3 = 0x%x, new value = 0x%x\n",
+ old_adapt, new_adapt);
+
+ /*
+ * 0x8C is the max possible value allowed by UniPro v3.0 spec, some HWs
+ * can accept 0x8D but some cannot.
+ */
+ if (new_adapt <= ADAPT_L0L1L2L3_LENGTH_MAX ||
+ (new_adapt == ADAPT_L0L1L2L3_LENGTH_MAX + 1 && host->hw_ver.minor == 0x1)) {
+ err = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTREFRESHL0L1L2L3),
+ new_adapt);
+ if (err)
+ goto out;
+
+ err = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_PEERRXHSG6ADAPTREFRESHL0L1L2L3),
+ &actual_adapt);
+ if (err)
+ goto out;
+
+ if (actual_adapt != new_adapt) {
+ limit_speed = true;
+ dev_warn(hba->dev, "PA_PeerRxHsG6AdaptRefreshL0L1L2L3 0x%x, expect 0x%x\n",
+ actual_adapt, new_adapt);
+ }
+ } else {
+ limit_speed = true;
+ dev_warn(hba->dev, "New PA_PeerRxHsG6AdaptRefreshL0L1L2L3 (0x%x) is too large!\n",
+ new_adapt);
+ }
+
+out:
+ if (limit_speed || err)
+ ufs_qcom_limit_max_gear(hba, UFS_HS_G5);
+}
+
static int ufs_qcom_apply_dev_quirks(struct ufs_hba *hba)
{
int err = 0;
+ ufs_qcom_fixup_tx_adapt_l0l1l2l3(hba);
+
if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME)
err = ufs_qcom_quirk_host_pa_saveconfigtime(hba);
--
2.34.1
* [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify()
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
2026-03-21 3:10 ` [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode() Can Guo
2026-03-21 3:10 ` [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length Can Guo
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:23 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom() Can Guo
` (2 subsequent siblings)
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, James E.J. Bottomley,
open list:ARM/QUALCOMM MAILING LIST, open list
On some platforms, HW does not support triggering TX EQTR from the most
reliable High-Speed (HS) Gear (HS Gear1), but only allows triggering TX
EQTR for the target HS Gear from that same HS Gear. To work around this HW
limitation, implement vops tx_eqtr_notify() to change the Power Mode to the
target TX EQTR HS Gear prior to the TX EQTR procedure and change the Power
Mode back to HS Gear1 (the most reliable gear) after the TX EQTR procedure.
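The bracketing above can be modeled as a tiny helper; the enum and gear values here are illustrative stand-ins for the kernel definitions, not the driver code itself:

```c
#include <assert.h>

/* Toy model of the PRE/POST gear bracketing around TX EQTR. */
enum notify_status { PRE_CHANGE, POST_CHANGE };

#define UFS_HS_G1 1

/*
 * Gear the link is expected to run at: the target gear while EQTR runs
 * (PRE_CHANGE), and HS-G1 again once it has finished (POST_CHANGE).
 */
static int eqtr_notify_gear(enum notify_status status, int target_gear)
{
	return status == PRE_CHANGE ? target_gear : UFS_HS_G1;
}
```

In the driver, each arm of that choice is an actual Power Mode change (PMC) request, so a failure on either side is reported rather than silently ignored.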
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/host/ufs-qcom.c | 41 +++++++++++++++++++++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index b94fe93b830e..eac5e95e740b 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -2505,6 +2505,46 @@ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
return min_t(u32, gear, hba->max_pwr_info.info.gear_rx);
}
+static int ufs_qcom_tx_eqtr_notify(struct ufs_hba *hba,
+ enum ufs_notify_change_status status,
+ struct ufs_pa_layer_attr *pwr_mode)
+{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ struct ufs_pa_layer_attr pwr_mode_hs_g1 = {
+ .gear_rx = UFS_HS_G1,
+ .gear_tx = UFS_HS_G1,
+ .lane_rx = pwr_mode->lane_rx,
+ .lane_tx = pwr_mode->lane_tx,
+ .pwr_rx = FAST_MODE,
+ .pwr_tx = FAST_MODE,
+ .hs_rate = pwr_mode->hs_rate,
+ };
+ u32 gear = pwr_mode->gear_tx;
+ u32 rate = pwr_mode->hs_rate;
+ int ret;
+
+ if (host->hw_ver.major != 0x7 || host->hw_ver.minor > 0x1)
+ return 0;
+
+ if (status == PRE_CHANGE) {
+ /* PMC to target HS Gear. */
+ ret = ufshcd_change_power_mode(hba, pwr_mode,
+ UFSHCD_PMC_POLICY_DONT_FORCE);
+ if (ret)
+ dev_err(hba->dev, "%s: Failed to PMC to target HS-G%u, Rate-%s: %d\n",
+ __func__, gear, ufs_hs_rate_to_str(rate), ret);
+ } else {
+ /* PMC back to HS-G1. */
+ ret = ufshcd_change_power_mode(hba, &pwr_mode_hs_g1,
+ UFSHCD_PMC_POLICY_DONT_FORCE);
+ if (ret)
+ dev_err(hba->dev, "%s: Failed to PMC to HS-G1, Rate-%s: %d\n",
+ __func__, ufs_hs_rate_to_str(rate), ret);
+ }
+
+ return ret;
+}
+
/*
* struct ufs_hba_qcom_vops - UFS QCOM specific variant operations
*
@@ -2535,6 +2575,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.get_outstanding_cqs = ufs_qcom_get_outstanding_cqs,
.config_esi = ufs_qcom_config_esi,
.freq_to_gear_speed = ufs_qcom_freq_to_gear_speed,
+ .tx_eqtr_notify = ufs_qcom_tx_eqtr_notify,
};
static const struct ufs_hba_variant_ops ufs_hba_qcom_sa8255p_vops = {
--
2.34.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom()
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
` (2 preceding siblings ...)
2026-03-21 3:10 ` [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify() Can Guo
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:24 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings() Can Guo
2026-03-21 3:10 ` [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization Can Guo
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, Alim Akhtar, James E.J. Bottomley, open list,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...
On some platforms, the host's M-PHY RX_FOM Attribute always reads 0,
meaning SW cannot rely on the Figure of Merit (FOM) to identify the
optimal TX Equalization settings for the device's TX Lanes. Implement the
vops ufs_qcom_get_rx_fom() such that SW can utilize the UFS Eye Opening
Monitor (EOM) to evaluate the TX Equalization settings for the device's
TX Lanes.
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/core/ufs-txeq.c | 6 +-
drivers/ufs/host/ufs-qcom.c | 312 ++++++++++++++++++++++++++++++++++++
drivers/ufs/host/ufs-qcom.h | 40 +++++
include/ufs/ufshcd.h | 3 +
include/ufs/unipro.h | 25 +++
5 files changed, 383 insertions(+), 3 deletions(-)
diff --git a/drivers/ufs/core/ufs-txeq.c b/drivers/ufs/core/ufs-txeq.c
index dc4aa5c06a83..7c0df28b1513 100644
--- a/drivers/ufs/core/ufs-txeq.c
+++ b/drivers/ufs/core/ufs-txeq.c
@@ -232,9 +232,8 @@ ufshcd_compose_tx_eq_setting(struct ufshcd_tx_eq_settings *settings,
*
* Returns 0 on success, negative error code otherwise
*/
-static int ufshcd_apply_tx_eq_settings(struct ufs_hba *hba,
- struct ufshcd_tx_eq_params *params,
- u32 gear)
+int ufshcd_apply_tx_eq_settings(struct ufs_hba *hba,
+ struct ufshcd_tx_eq_params *params, u32 gear)
{
struct ufs_pa_layer_attr *pwr_info = &hba->max_pwr_info.info;
u32 setting;
@@ -263,6 +262,7 @@ static int ufshcd_apply_tx_eq_settings(struct ufs_hba *hba,
return 0;
}
+EXPORT_SYMBOL_GPL(ufshcd_apply_tx_eq_settings);
/**
* ufshcd_evaluate_tx_eqtr_fom - Evaluate TX EQTR FOM results
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index eac5e95e740b..a0314cb55c7f 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -2505,6 +2505,317 @@ static u32 ufs_qcom_freq_to_gear_speed(struct ufs_hba *hba, unsigned long freq)
return min_t(u32, gear, hba->max_pwr_info.info.gear_rx);
}
+static int ufs_qcom_host_eom_config(struct ufs_hba *hba, int lane,
+ const struct ufs_eom_coord *eom_coord,
+ u32 target_test_count)
+{
+ enum ufs_eom_eye_mask eye_mask = eom_coord->eye_mask;
+ int v_step = eom_coord->v_step;
+ int t_step = eom_coord->t_step;
+ u32 volt_step, timing_step;
+ int ret;
+
+ if (abs(v_step) > UFS_QCOM_EOM_VOLTAGE_STEPS_MAX) {
+ dev_err(hba->dev, "Invalid EOM Voltage Step: %d\n", v_step);
+ return -ERANGE;
+ }
+
+ if (abs(t_step) > UFS_QCOM_EOM_TIMING_STEPS_MAX) {
+ dev_err(hba->dev, "Invalid EOM Timing Step: %d\n", t_step);
+ return -ERANGE;
+ }
+
+ if (v_step < 0)
+ volt_step = RX_EYEMON_NEGATIVE_STEP_BIT | (u32)(-v_step);
+ else
+ volt_step = (u32)v_step;
+
+ if (t_step < 0)
+ timing_step = RX_EYEMON_NEGATIVE_STEP_BIT | (u32)(-t_step);
+ else
+ timing_step = (u32)t_step;
+
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(RX_EYEMON_ENABLE,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ BIT(eye_mask) | RX_EYEMON_EXTENDED_VRANGE_BIT);
+ if (ret) {
+ dev_err(hba->dev, "Failed to enable Host EOM on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(RX_EYEMON_TIMING_STEPS,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ timing_step);
+ if (ret) {
+ dev_err(hba->dev, "Failed to set Host EOM timing step on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(RX_EYEMON_VOLTAGE_STEPS,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ volt_step);
+ if (ret) {
+ dev_err(hba->dev, "Failed to set Host EOM voltage step on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(RX_EYEMON_TARGET_TEST_COUNT,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ target_test_count);
+ if (ret)
+ dev_err(hba->dev, "Failed to set Host EOM target test count on Lane %d: %d\n",
+ lane, ret);
+
+ return ret;
+}
+
+static int ufs_qcom_host_eom_may_stop(struct ufs_hba *hba, int lane,
+ u32 target_test_count, u32 *err_count)
+{
+ u32 start, tested_count, error_count;
+ int ret;
+
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB_SEL(RX_EYEMON_START,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ &start);
+ if (ret) {
+ dev_err(hba->dev, "Failed to get Host EOM start status on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ if (start & 0x1)
+ return -EAGAIN;
+
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB_SEL(RX_EYEMON_TESTED_COUNT,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ &tested_count);
+ if (ret) {
+ dev_err(hba->dev, "Failed to get Host EOM tested count on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB_SEL(RX_EYEMON_ERROR_COUNT,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ &error_count);
+ if (ret) {
+ dev_err(hba->dev, "Failed to get Host EOM error count on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+
+ /* EOM can stop */
+ if ((tested_count >= target_test_count - 3) || error_count > 0) {
+ *err_count = error_count;
+
+ /* Disable EOM */
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB_SEL(RX_EYEMON_ENABLE,
+ UIC_ARG_MPHY_RX_GEN_SEL_INDEX(lane)),
+ 0x0);
+ if (ret) {
+ dev_err(hba->dev, "Failed to disable Host EOM on Lane %d: %d\n",
+ lane, ret);
+ return ret;
+ }
+ } else {
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static int ufs_qcom_host_eom_scan(struct ufs_hba *hba, int num_lanes,
+ const struct ufs_eom_coord *eom_coord,
+ u32 target_test_count, u32 *err_count)
+{
+ bool eom_stopped[PA_MAXDATALANES] = { 0 };
+ int lane, ret;
+ u32 setting;
+
+ if (!err_count || !eom_coord)
+ return -EINVAL;
+
+ if (target_test_count < UFS_QCOM_EOM_TARGET_TEST_COUNT_MIN) {
+ dev_err(hba->dev, "Target test count (%u) too small for Host EOM\n",
+ target_test_count);
+ return -ERANGE;
+ }
+
+ for (lane = 0; lane < num_lanes; lane++) {
+ ret = ufs_qcom_host_eom_config(hba, lane, eom_coord,
+ target_test_count);
+ if (ret) {
+ dev_err(hba->dev, "Failed to config Host RX EOM: %d\n", ret);
+ return ret;
+ }
+ }
+
+ /*
+ * Trigger a PACP_PWR_req to kick start EOM, but not to really change
+ * the Power Mode.
+ */
+ ret = ufshcd_uic_change_pwr_mode(hba, FAST_MODE << 4 | FAST_MODE);
+ if (ret) {
+ dev_err(hba->dev, "Failed to change power mode to kick start Host EOM: %d\n",
+ ret);
+ return ret;
+ }
+
+more_burst:
+ /* Create burst on Host RX Lane. */
+ ufshcd_dme_peer_get(hba, UIC_ARG_MIB(PA_LOCALVERINFO), &setting);
+
+ for (lane = 0; lane < num_lanes; lane++) {
+ if (eom_stopped[lane])
+ continue;
+
+ ret = ufs_qcom_host_eom_may_stop(hba, lane, target_test_count,
+ &err_count[lane]);
+ if (!ret) {
+ eom_stopped[lane] = true;
+ } else if (ret == -EAGAIN) {
+ /* Need more burst to exercise EOM */
+ goto more_burst;
+ } else {
+ dev_err(hba->dev, "Failed to stop Host EOM: %d\n", ret);
+ return ret;
+ }
+
+ dev_dbg(hba->dev, "Host RX Lane %d EOM, v_step %d, t_step %d, error count %u\n",
+ lane, eom_coord->v_step, eom_coord->t_step,
+ err_count[lane]);
+ }
+
+ return 0;
+}
+
+static int ufs_qcom_host_sw_rx_fom(struct ufs_hba *hba, int num_lanes, u32 *fom)
+{
+ const struct ufs_eom_coord *eom_coord = sw_rx_fom_eom_coords_g6;
+ u32 eom_err_count[PA_MAXDATALANES] = { 0 };
+ u32 curr_ahit;
+ int lane, i, ret;
+
+ if (!fom)
+ return -EINVAL;
+
+ /* Stop the auto hibernate idle timer */
+ curr_ahit = ufshcd_readl(hba, REG_AUTO_HIBERNATE_IDLE_TIMER);
+ if (curr_ahit)
+ ufshcd_writel(hba, 0, REG_AUTO_HIBERNATE_IDLE_TIMER);
+
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TXHSADAPTTYPE), PA_NO_ADAPT);
+ if (ret) {
+ dev_err(hba->dev, "Failed to select NO_ADAPT before starting Host EOM: %d\n", ret);
+ goto out;
+ }
+
+ for (i = 0; i < SW_RX_FOM_EOM_COORDS; i++, eom_coord++) {
+ ret = ufs_qcom_host_eom_scan(hba, num_lanes, eom_coord,
+ UFS_QCOM_EOM_TARGET_TEST_COUNT_G6,
+ eom_err_count);
+ if (ret) {
+ dev_err(hba->dev, "Failed to run Host EOM scan: %d\n", ret);
+ break;
+ }
+
+ for (lane = 0; lane < num_lanes; lane++) {
+ /* Bad coordinates have no weights */
+ if (eom_err_count[lane])
+ continue;
+ fom[lane] += SW_RX_FOM_EOM_COORDS_WEIGHT;
+ }
+ }
+
+out:
+ /* Restore the auto hibernate idle timer */
+ if (curr_ahit)
+ ufshcd_writel(hba, curr_ahit, REG_AUTO_HIBERNATE_IDLE_TIMER);
+
+ return ret;
+}
+
+static int ufs_qcom_get_rx_fom(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode,
+ struct tx_eqtr_iter *h_iter,
+ struct tx_eqtr_iter *d_iter)
+{
+ struct ufshcd_tx_eq_params *params __free(kfree) =
+ kzalloc(sizeof(*params), GFP_KERNEL);
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ struct ufs_pa_layer_attr old_pwr_info;
+ u32 fom[PA_MAXDATALANES] = { 0 };
+ u32 gear = pwr_mode->gear_tx;
+ u32 rate = pwr_mode->hs_rate;
+ int lane, ret;
+
+ if (host->hw_ver.major != 0x7 || host->hw_ver.minor > 0x1 ||
+ gear <= UFS_HS_G5 || !d_iter || !d_iter->is_updated)
+ return 0;
+
+ if (gear < UFS_HS_G1 || gear > UFS_HS_GEAR_MAX)
+ return -ERANGE;
+
+ if (!params)
+ return -ENOMEM;
+
+ memcpy(&old_pwr_info, &hba->pwr_info, sizeof(struct ufs_pa_layer_attr));
+
+ memcpy(params, &hba->tx_eq_params[gear - 1], sizeof(struct ufshcd_tx_eq_params));
+ for (lane = 0; lane < pwr_mode->lane_rx; lane++) {
+ params->device[lane].preshoot = d_iter->preshoot;
+ params->device[lane].deemphasis = d_iter->deemphasis;
+ }
+
+ /* Use TX EQTR settings as Device's TX Equalization settings. */
+ ret = ufshcd_apply_tx_eq_settings(hba, params, gear);
+ if (ret) {
+ dev_err(hba->dev, "%s: Failed to apply TX EQ settings for HS-G%u: %d\n",
+ __func__, gear, ret);
+ return ret;
+ }
+
+ /* Force PMC to target HS Gear to use new TX Equalization settings. */
+ ret = ufshcd_change_power_mode(hba, pwr_mode, UFSHCD_PMC_POLICY_FORCE);
+ if (ret) {
+ dev_err(hba->dev, "%s: Failed to change power mode to HS-G%u, Rate-%s: %d\n",
+ __func__, gear, ufs_hs_rate_to_str(rate), ret);
+ return ret;
+ }
+
+ ret = ufs_qcom_host_sw_rx_fom(hba, pwr_mode->lane_rx, fom);
+ if (ret) {
+ dev_err(hba->dev, "Failed to get SW FOM of TX (PreShoot: %u, DeEmphasis: %u): %d\n",
+ d_iter->preshoot, d_iter->deemphasis, ret);
+ return ret;
+ }
+
+ /* Restore Device's TX Equalization settings. */
+ ret = ufshcd_apply_tx_eq_settings(hba, &hba->tx_eq_params[gear - 1], gear);
+ if (ret) {
+ dev_err(hba->dev, "%s: Failed to apply TX EQ settings for HS-G%u: %d\n",
+ __func__, gear, ret);
+ return ret;
+ }
+
+ /* Restore Power Mode. */
+ ret = ufshcd_change_power_mode(hba, &old_pwr_info, UFSHCD_PMC_POLICY_FORCE);
+ if (ret) {
+ dev_err(hba->dev, "%s: Failed to restore power mode to HS-G%u: %d\n",
+ __func__, old_pwr_info.gear_tx, ret);
+ return ret;
+ }
+
+ for (lane = 0; lane < pwr_mode->lane_rx; lane++)
+ d_iter->fom[lane] = fom[lane];
+
+ return 0;
+}
+
static int ufs_qcom_tx_eqtr_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
struct ufs_pa_layer_attr *pwr_mode)
@@ -2575,6 +2886,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.get_outstanding_cqs = ufs_qcom_get_outstanding_cqs,
.config_esi = ufs_qcom_config_esi,
.freq_to_gear_speed = ufs_qcom_freq_to_gear_speed,
+ .get_rx_fom = ufs_qcom_get_rx_fom,
.tx_eqtr_notify = ufs_qcom_tx_eqtr_notify,
};
diff --git a/drivers/ufs/host/ufs-qcom.h b/drivers/ufs/host/ufs-qcom.h
index 1111ab34da01..7183d6b2c8bb 100644
--- a/drivers/ufs/host/ufs-qcom.h
+++ b/drivers/ufs/host/ufs-qcom.h
@@ -33,6 +33,46 @@
#define DL_VS_CLK_CFG_MASK GENMASK(9, 0)
#define DME_VS_CORE_CLK_CTRL_DME_HW_CGC_EN BIT(9)
+#define UFS_QCOM_EOM_VOLTAGE_STEPS_MAX 127
+#define UFS_QCOM_EOM_TIMING_STEPS_MAX 63
+#define UFS_QCOM_EOM_TARGET_TEST_COUNT_MIN 8
+#define UFS_QCOM_EOM_TARGET_TEST_COUNT_G6 0x3F
+
+#define SW_RX_FOM_EOM_COORDS 23
+#define SW_RX_FOM_EOM_COORDS_WEIGHT (127 / SW_RX_FOM_EOM_COORDS)
+
+struct ufs_eom_coord {
+ int t_step;
+ int v_step;
+ u8 eye_mask;
+};
+
+static const struct ufs_eom_coord sw_rx_fom_eom_coords_g6[SW_RX_FOM_EOM_COORDS] = {
+ [0] = { -2, -15, UFS_EOM_EYE_MASK_M },
+ [1] = { 0, -15, UFS_EOM_EYE_MASK_M },
+ [2] = { 2, -15, UFS_EOM_EYE_MASK_M },
+ [3] = { -4, -10, UFS_EOM_EYE_MASK_M },
+ [4] = { -2, -10, UFS_EOM_EYE_MASK_M },
+ [5] = { 0, -10, UFS_EOM_EYE_MASK_M },
+ [6] = { 2, -10, UFS_EOM_EYE_MASK_M },
+ [7] = { 4, -10, UFS_EOM_EYE_MASK_M },
+ [8] = { -6, 0, UFS_EOM_EYE_MASK_M },
+ [9] = { -4, 0, UFS_EOM_EYE_MASK_M },
+ [10] = { -2, 0, UFS_EOM_EYE_MASK_M },
+ [11] = { 0, 0, UFS_EOM_EYE_MASK_M },
+ [12] = { 2, 0, UFS_EOM_EYE_MASK_M },
+ [13] = { 4, 0, UFS_EOM_EYE_MASK_M },
+ [14] = { 6, 0, UFS_EOM_EYE_MASK_M },
+ [15] = { -4, 10, UFS_EOM_EYE_MASK_M },
+ [16] = { -2, 10, UFS_EOM_EYE_MASK_M },
+ [17] = { 0, 10, UFS_EOM_EYE_MASK_M },
+ [18] = { 2, 10, UFS_EOM_EYE_MASK_M },
+ [19] = { 4, 10, UFS_EOM_EYE_MASK_M },
+ [20] = { -2, 15, UFS_EOM_EYE_MASK_M },
+ [21] = { 0, 15, UFS_EOM_EYE_MASK_M },
+ [22] = { 2, 15, UFS_EOM_EYE_MASK_M },
+};
+
/* Qualcomm MCQ Configuration */
#define UFS_QCOM_MCQCAP_QCFGPTR 224 /* 0xE0 in hex */
#define UFS_QCOM_MCQ_CONFIG_OFFSET (UFS_QCOM_MCQCAP_QCFGPTR * 0x200) /* 0x1C000 */
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index bc9e48e89db4..be15b6247303 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -1515,6 +1515,9 @@ extern int ufshcd_config_pwr_mode(struct ufs_hba *hba,
struct ufs_pa_layer_attr *desired_pwr_mode,
enum ufshcd_pmc_policy pmc_policy);
extern int ufshcd_uic_change_pwr_mode(struct ufs_hba *hba, u8 mode);
+extern int ufshcd_apply_tx_eq_settings(struct ufs_hba *hba,
+ struct ufshcd_tx_eq_params *params,
+ u32 gear);
/* UIC command interfaces for DME primitives */
#define DME_LOCAL 0
diff --git a/include/ufs/unipro.h b/include/ufs/unipro.h
index 4aa592130b4e..f849a2a101ae 100644
--- a/include/ufs/unipro.h
+++ b/include/ufs/unipro.h
@@ -32,6 +32,8 @@
#define TX_LCC_SEQUENCER 0x0032
#define TX_MIN_ACTIVATETIME 0x0033
#define TX_PWM_G6_G7_SYNC_LENGTH 0x0034
+#define TX_HS_DEEMPHASIS_SETTING 0x0037
+#define TX_HS_PRESHOOT_SETTING 0x003B
#define TX_REFCLKFREQ 0x00EB
#define TX_CFGCLKFREQVAL 0x00EC
#define CFGEXTRATTR 0x00F0
@@ -76,10 +78,27 @@
#define RX_REFCLKFREQ 0x00EB
#define RX_CFGCLKFREQVAL 0x00EC
#define CFGWIDEINLN 0x00F0
+#define RX_EYEMON_CAP 0x00F1
+#define RX_EYEMON_TIMING_MAX_STEPS_CAP 0x00F2
+#define RX_EYEMON_TIMING_MAX_OFFSET_CAP 0x00F3
+#define RX_EYEMON_VOLTAGE_MAX_STEPS_CAP 0x00F4
+#define RX_EYEMON_VOLTAGE_MAX_OFFSET_CAP 0x00F5
+#define RX_EYEMON_ENABLE 0x00F6
+#define RX_EYEMON_TIMING_STEPS 0x00F7
+#define RX_EYEMON_VOLTAGE_STEPS 0x00F8
+#define RX_EYEMON_TARGET_TEST_COUNT 0x00F9
+#define RX_EYEMON_TESTED_COUNT 0x00FA
+#define RX_EYEMON_ERROR_COUNT 0x00FB
+#define RX_EYEMON_START 0x00FC
+#define RX_EYEMON_EXTENDED_ERROR_COUNT 0x00FD
+
#define ENARXDIRECTCFG4 0x00F2
#define ENARXDIRECTCFG3 0x00F3
#define ENARXDIRECTCFG2 0x00F4
+#define RX_EYEMON_NEGATIVE_STEP_BIT BIT(6)
+#define RX_EYEMON_EXTENDED_VRANGE_BIT BIT(6)
+
#define is_mphy_tx_attr(attr) (attr < RX_MODE)
#define RX_ADV_FINE_GRAN_STEP(x) ((((x) & 0x3) << 1) | 0x1)
#define SYNC_LEN_FINE(x) ((x) & 0x3F)
@@ -297,6 +316,12 @@ enum ufs_tx_hs_deemphasis {
UFS_TX_HS_DEEMPHASIS_DB_7P6,
};
+enum ufs_eom_eye_mask {
+ UFS_EOM_EYE_MASK_M,
+ UFS_EOM_EYE_MASK_L,
+ UFS_EOM_EYE_MASK_U,
+};
+
#define DL_FC0ProtectionTimeOutVal_Default 8191
#define DL_TC0ReplayTimeOutVal_Default 65535
#define DL_AFC0ReqTimeOutVal_Default 32767
--
2.34.1
* [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings()
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
` (3 preceding siblings ...)
2026-03-21 3:10 ` [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom() Can Guo
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:24 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization Can Guo
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, James E.J. Bottomley,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list
On some platforms, when Host Software triggers TX Equalization Training,
HW does not take the TX EQTR settings programmed in PA_TxEQTRSetting;
instead, it takes them from PA_TxEQG1Setting. Implement vops
apply_tx_eqtr_settings() to work around this by programming the TX EQTR
settings into PA_TxEQG1Setting during the TX EQTR procedure.
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/host/ufs-qcom.c | 31 +++++++++++++++++++++++++++++++
drivers/ufs/host/ufs-qcom.h | 2 ++
2 files changed, 33 insertions(+)
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index a0314cb55c7f..9abdeeee81f7 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -2816,6 +2816,26 @@ static int ufs_qcom_get_rx_fom(struct ufs_hba *hba,
return 0;
}
+static int ufs_qcom_apply_tx_eqtr_settings(struct ufs_hba *hba,
+ struct ufs_pa_layer_attr *pwr_mode,
+ struct tx_eqtr_iter *h_iter,
+ struct tx_eqtr_iter *d_iter)
+{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ u32 setting = 0;
+ int lane;
+
+ if (host->hw_ver.major != 0x7 || host->hw_ver.minor > 0x1)
+ return 0;
+
+ for (lane = 0; lane < pwr_mode->lane_tx; lane++) {
+ setting |= TX_HS_PRESHOOT_BITS(lane, h_iter->preshoot);
+ setting |= TX_HS_DEEMPHASIS_BITS(lane, h_iter->deemphasis);
+ }
+
+ return ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TXEQG1SETTING), setting);
+}
+
static int ufs_qcom_tx_eqtr_notify(struct ufs_hba *hba,
enum ufs_notify_change_status status,
struct ufs_pa_layer_attr *pwr_mode)
@@ -2838,6 +2858,11 @@ static int ufs_qcom_tx_eqtr_notify(struct ufs_hba *hba,
return 0;
if (status == PRE_CHANGE) {
+ ret = ufshcd_dme_get(hba, UIC_ARG_MIB(PA_TXEQG1SETTING),
+ &host->saved_tx_eq_g1_setting);
+ if (ret)
+ return ret;
+
/* PMC to target HS Gear. */
ret = ufshcd_change_power_mode(hba, pwr_mode,
UFSHCD_PMC_POLICY_DONT_FORCE);
@@ -2845,6 +2870,11 @@ static int ufs_qcom_tx_eqtr_notify(struct ufs_hba *hba,
dev_err(hba->dev, "%s: Failed to PMC to target HS-G%u, Rate-%s: %d\n",
__func__, gear, ufs_hs_rate_to_str(rate), ret);
} else {
+ ret = ufshcd_dme_set(hba, UIC_ARG_MIB(PA_TXEQG1SETTING),
+ host->saved_tx_eq_g1_setting);
+ if (ret)
+ return ret;
+
/* PMC back to HS-G1. */
ret = ufshcd_change_power_mode(hba, &pwr_mode_hs_g1,
UFSHCD_PMC_POLICY_DONT_FORCE);
@@ -2887,6 +2917,7 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
.config_esi = ufs_qcom_config_esi,
.freq_to_gear_speed = ufs_qcom_freq_to_gear_speed,
.get_rx_fom = ufs_qcom_get_rx_fom,
+ .apply_tx_eqtr_settings = ufs_qcom_apply_tx_eqtr_settings,
.tx_eqtr_notify = ufs_qcom_tx_eqtr_notify,
};
diff --git a/drivers/ufs/host/ufs-qcom.h b/drivers/ufs/host/ufs-qcom.h
index 7183d6b2c8bb..5d083331a7f4 100644
--- a/drivers/ufs/host/ufs-qcom.h
+++ b/drivers/ufs/host/ufs-qcom.h
@@ -348,6 +348,8 @@ struct ufs_qcom_host {
u32 phy_gear;
bool esi_enabled;
+
+ u32 saved_tx_eq_g1_setting;
};
struct ufs_qcom_drvdata {
--
2.34.1
* [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
` (4 preceding siblings ...)
2026-03-21 3:10 ` [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings() Can Guo
@ 2026-03-21 3:10 ` Can Guo
2026-03-23 9:25 ` Bean Huo
5 siblings, 1 reply; 12+ messages in thread
From: Can Guo @ 2026-03-21 3:10 UTC (permalink / raw)
To: avri.altman, bvanassche, beanhuo, peter.wang, martin.petersen,
mani
Cc: linux-scsi, Can Guo, James E.J. Bottomley,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list
Enable TX Equalization for hosts with HW version 0x7 and onwards.
Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
---
drivers/ufs/host/ufs-qcom.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
index 9abdeeee81f7..5a58ffef3d27 100644
--- a/drivers/ufs/host/ufs-qcom.c
+++ b/drivers/ufs/host/ufs-qcom.c
@@ -1384,6 +1384,8 @@ static void ufs_qcom_set_host_caps(struct ufs_hba *hba)
static void ufs_qcom_set_caps(struct ufs_hba *hba)
{
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
hba->caps |= UFSHCD_CAP_CLK_GATING | UFSHCD_CAP_HIBERN8_WITH_CLK_GATING;
hba->caps |= UFSHCD_CAP_CLK_SCALING | UFSHCD_CAP_WB_WITH_CLK_SCALING;
hba->caps |= UFSHCD_CAP_AUTO_BKOPS_SUSPEND;
@@ -1391,6 +1393,9 @@ static void ufs_qcom_set_caps(struct ufs_hba *hba)
hba->caps |= UFSHCD_CAP_AGGR_POWER_COLLAPSE;
hba->caps |= UFSHCD_CAP_RPM_AUTOSUSPEND;
+ if (host->hw_ver.major >= 0x7)
+ hba->caps |= UFSHCD_CAP_TX_EQUALIZATION;
+
ufs_qcom_set_host_caps(hba);
}
--
2.34.1
* Re: [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode()
2026-03-21 3:10 ` [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode() Can Guo
@ 2026-03-23 9:10 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:10 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, Alim Akhtar, James E.J. Bottomley,
Sai Krishna Potthuri, Ajay Neeli, Peter Griffin,
Krzysztof Kozlowski, Chaotian Jing, Stanley Jhu, Orson Zhai,
Baolin Wang, Chunyan Zhang, Matthias Brugger,
AngeloGioacchino Del Regno, Bao D. Nguyen, Adrian Hunter,
Archana Patni, open list,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
moderated list:ARM/SAMSUNG S3C, S5P AND EXYNOS ARM ARCHITECTURES,
moderated list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> Most vendor specific implementations of vops pwr_change_notify(PRE_CHANGE)
> are fulfilling two things at once:
> - Vendor specific target power mode negotiation
> - Vendor specific power mode change preparation
>
> When TX Equalization is added into consideration, before power mode change
> to a target power mode, TX Equalization Training (EQTR) needs to be done
> for that target power mode. In addition, the UFSHCI spec requires starting
> TX EQTR from HS-G1 (the most reliable High Speed Gear).
>
> Adding TX EQTR before pwr_change_notify(PRE_CHANGE) is not applicable
> because we don't know the negotiated power mode yet.
>
> Adding TX EQTR post pwr_change_notify(PRE_CHANGE) is inappropriate
> because pwr_change_notify(PRE_CHANGE) has finished preparation for a power
> mode change to the negotiated power mode, yet we are changing power mode to
> HS-G1 for TX EQTR.
>
> Add a new vops negotiate_pwr_mode() so that vendor specific power mode
> negotiation can be fulfilled in its vendor specific implementations.
> Later on, TX EQTR can be added post vops negotiate_pwr_mode() and before
> vops pwr_change_notify(PRE_CHANGE).
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Looks good to me. Let’s move forward!
Reviewed-by: Bean Huo <beanhuo@micron.com>
* Re: [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length
2026-03-21 3:10 ` [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length Can Guo
@ 2026-03-23 9:22 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:22 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, James E.J. Bottomley,
open list:ARM/QUALCOMM MAILING LIST, open list
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> If HS-G6 Power Mode change handshake is successful and outbound data Lanes
> are expected to transmit ADAPT, M-TX Lanes shall be configured as
>
> if (Adapt Type == REFRESH)
> TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 = PA_PeerRxHsG6AdaptRefreshL0L1L2L3.
> else if (Adapt Type == INITIAL)
> TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 = PA_PeerRxHsG6AdaptInitialL0L1L2L3.
>
> On some platforms, the ADAPT_L0_L1_L2_L3 duration on Host TX Lanes is only
> half of the theoretical ADAPT_L0_L1_L2_L3 duration TADAPT_L0_L1_L2_L3 (in
> PAM-4 UI) calculated from TX_HS_ADAPT_LENGTH_L0_L1_L2_L3.
>
> For such platforms, the workaround is to double the ADAPT_L0_L1_L2_L3
> duration by uplifting TX_HS_ADAPT_LENGTH_L0_L1_L2_L3. UniPro initializes
> TX_HS_ADAPT_LENGTH_L0_L1_L2_L3 during the HS-G6 Power Mode change
> handshake, so it would be too late for SW to update it after the HS-G6
> Power Mode change. Update PA_PeerRxHsG6AdaptRefreshL0L1L2L3 and
> PA_PeerRxHsG6AdaptInitialL0L1L2L3 after Link Startup and before the HS-G6
> Power Mode change, so that UniPro uses the updated values during the
> HS-G6 Power Mode change handshake.
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Please add my reviewed tag for this patch:
Reviewed-by: Bean Huo <beanhuo@micron.com>
* Re: [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify()
2026-03-21 3:10 ` [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify() Can Guo
@ 2026-03-23 9:23 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:23 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, James E.J. Bottomley,
open list:ARM/QUALCOMM MAILING LIST, open list
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> On some platforms, HW does not support triggering TX EQTR from the most
> reliable High-Speed (HS) Gear (HS Gear1); it only allows triggering TX
> EQTR for the target HS Gear from the same HS Gear. To work around the HW
> limitation, implement vops tx_eqtr_notify() to change Power Mode to the
> target TX EQTR HS Gear prior to TX EQTR procedure and change Power Mode
> back to HS Gear1 (the most reliable gear) post TX EQTR procedure.
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
* Re: [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom()
2026-03-21 3:10 ` [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom() Can Guo
@ 2026-03-23 9:24 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:24 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, Alim Akhtar, James E.J. Bottomley, open list,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> On some platforms, host's M-PHY RX_FOM Attribute always reads 0, meaning
> SW cannot rely on Figure of Merit (FOM) to identify the optimal TX
> Equalization settings for device's TX Lanes. Implement the vops
> ufs_qcom_get_rx_fom() such that SW can utilize the UFS Eye Opening Monitor
> (EOM) to evaluate the TX Equalization settings for device's TX Lanes.
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
* Re: [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings()
2026-03-21 3:10 ` [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings() Can Guo
@ 2026-03-23 9:24 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:24 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, James E.J. Bottomley,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> On some platforms, when Host Software triggers TX Equalization Training,
> HW does not take TX EQTR settings programmed in PA_TxEQTRSetting, instead
> HW takes TX EQTR settings from PA_TxEQG1Setting. Implement vops
> apply_tx_eqtr_settings() to work around it by programming TX EQTR settings
> to PA_TxEQG1Setting during TX EQTR procedure.
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
* Re: [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization
2026-03-21 3:10 ` [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization Can Guo
@ 2026-03-23 9:25 ` Bean Huo
0 siblings, 0 replies; 12+ messages in thread
From: Bean Huo @ 2026-03-23 9:25 UTC (permalink / raw)
To: Can Guo, avri.altman, bvanassche, beanhuo, peter.wang,
martin.petersen, mani
Cc: linux-scsi, James E.J. Bottomley,
open list:UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER...,
open list
On Fri, 2026-03-20 at 20:10 -0700, Can Guo wrote:
> Enable TX Equalization for hosts with HW version 0x7 and onwards.
>
> Signed-off-by: Can Guo <can.guo@oss.qualcomm.com>
Reviewed-by: Bean Huo <beanhuo@micron.com>
end of thread, other threads:[~2026-03-23 9:34 UTC | newest]
Thread overview: 12+ messages
-- links below jump to the message on this page --
[not found] <20260321031021.1722459-1-can.guo@oss.qualcomm.com>
2026-03-21 3:10 ` [PATCH v4 01/12] scsi: ufs: core: Introduce a new ufshcd vops negotiate_pwr_mode() Can Guo
2026-03-23 9:10 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 08/12] scsi: ufs: ufs-qcom: Fixup PAM-4 TX L0_L1_L2_L3 adaptation pattern length Can Guo
2026-03-23 9:22 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 09/12] scsi: ufs: ufs-qcom: Implement vops tx_eqtr_notify() Can Guo
2026-03-23 9:23 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 10/12] scsi: ufs: ufs-qcom: Implement vops get_rx_fom() Can Guo
2026-03-23 9:24 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 11/12] scsi: ufs: ufs-qcom: Implement vops apply_tx_eqtr_settings() Can Guo
2026-03-23 9:24 ` Bean Huo
2026-03-21 3:10 ` [PATCH v4 12/12] scsi: ufs: ufs-qcom: Enable TX Equalization Can Guo
2026-03-23 9:25 ` Bean Huo