linux-usb.vger.kernel.org archive mirror
* [PATCH 00/20] thunderbolt: Rework TMU and CLx support
@ 2023-05-29 10:04 Mika Westerberg
  2023-05-29 10:04 ` [PATCH 01/20] thunderbolt: Introduce tb_switch_downstream_port() Mika Westerberg
                   ` (20 more replies)
  0 siblings, 21 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Hi all,

This series reworks the TMU and CLx support code to better match what we
do elsewhere in the driver and prepares for the USB4v2 adaptive TMU
support that we are going to add in a subsequent series (I'm sending that
out later this week). I've split this part out from the USB4v2 support in
the hope that it makes reviewing easier.

Gil Fine (1):
  thunderbolt: Introduce tb_switch_downstream_port()

Mika Westerberg (19):
  thunderbolt: Introduce tb_xdomain_downstream_port()
  thunderbolt: Fix a couple of style issues in TMU code
  thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled()
  thunderbolt: Rework Titan Ridge TMU objection disable function
  thunderbolt: Get rid of tb_switch_enable_tmu_1st_child()
  thunderbolt: Move TMU configuration to tb_enable_tmu()
  thunderbolt: Move tb_enable_tmu() close to other TMU functions
  thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure()
  thunderbolt: Move CLx support functions into clx.c
  thunderbolt: Get rid of __tb_switch_[en|dis]able_clx()
  thunderbolt: Move CLx enabling into tb_enable_clx()
  thunderbolt: Switch CL states from enum to a bitmask
  thunderbolt: Check for first depth router in tb.c
  thunderbolt: Do not call CLx functions from TMU code
  thunderbolt: Prefix TMU post time log message with "TMU: "
  thunderbolt: Prefix CL state related log messages with "CLx: "
  thunderbolt: Initialize CL states from the hardware
  thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled
  thunderbolt: Disable CL states when a DMA tunnel is established

 drivers/thunderbolt/Makefile  |   2 +-
 drivers/thunderbolt/acpi.c    |   5 +-
 drivers/thunderbolt/clx.c     | 416 ++++++++++++++++++++++++++++++++++
 drivers/thunderbolt/debugfs.c |  35 ++-
 drivers/thunderbolt/icm.c     |  24 +-
 drivers/thunderbolt/switch.c  | 378 +-----------------------------
 drivers/thunderbolt/tb.c      | 227 +++++++++++++------
 drivers/thunderbolt/tb.h      | 102 ++++-----
 drivers/thunderbolt/tmu.c     | 152 ++++---------
 drivers/thunderbolt/usb4.c    |   9 +-
 drivers/thunderbolt/xdomain.c |  16 +-
 11 files changed, 719 insertions(+), 647 deletions(-)
 create mode 100644 drivers/thunderbolt/clx.c

-- 
2.39.2



* [PATCH 01/20] thunderbolt: Introduce tb_switch_downstream_port()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 02/20] thunderbolt: Introduce tb_xdomain_downstream_port() Mika Westerberg
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

From: Gil Fine <gil.fine@intel.com>

Introduce a tb_switch_downstream_port() helper that returns the
downstream port of the parent switch connected to the upstream port of
the specified switch. From now on, use it across the driver where
applicable.

While there, fix whitespace in a comment and rename 'downstream' to
'down' to be consistent with the rest of the driver.
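
For quick reference, the new helper is a thin wrapper around
tb_route()/tb_switch_parent() (copied from the tb.h hunk below), and the
rest of the patch is mechanical call-site conversion:

	static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
	{
		if (WARN_ON(!tb_route(sw)))
			return NULL;
		return tb_port_at(tb_route(sw), tb_switch_parent(sw));
	}

	/* typical call-site conversion, here in tb_switch_lane_bonding_enable() */
	-	down = tb_port_at(route, parent);
	+	down = tb_switch_downstream_port(sw);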

Signed-off-by: Gil Fine <gil.fine@intel.com>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/acpi.c   |  5 ++---
 drivers/thunderbolt/icm.c    | 24 ++++++++++--------------
 drivers/thunderbolt/switch.c | 19 +++++++------------
 drivers/thunderbolt/tb.c     |  8 +++-----
 drivers/thunderbolt/tb.h     | 14 ++++++++++++++
 drivers/thunderbolt/tmu.c    | 29 +++++++++++++----------------
 drivers/thunderbolt/usb4.c   |  9 ++++-----
 7 files changed, 53 insertions(+), 55 deletions(-)

diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
index 3514bf65b7a4..38fefd0e5268 100644
--- a/drivers/thunderbolt/acpi.c
+++ b/drivers/thunderbolt/acpi.c
@@ -296,16 +296,15 @@ static bool tb_acpi_bus_match(struct device *dev)
 
 static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
 {
+	struct tb_switch *parent_sw = tb_switch_parent(sw);
 	struct acpi_device *adev = NULL;
-	struct tb_switch *parent_sw;
 
 	/*
 	 * Device routers exists under the downstream facing USB4 port
 	 * of the parent router. Their _ADR is always 0.
 	 */
-	parent_sw = tb_switch_parent(sw);
 	if (parent_sw) {
-		struct tb_port *port = tb_port_at(tb_route(sw), parent_sw);
+		struct tb_port *port = tb_switch_downstream_port(sw);
 		struct acpi_device *port_adev;
 
 		port_adev = acpi_find_child_by_adr(ACPI_COMPANION(&parent_sw->dev),
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 86521ebb2579..05274caf1466 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -644,13 +644,14 @@ static int add_switch(struct tb_switch *parent_sw, struct tb_switch *sw)
 	return ret;
 }
 
-static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
-			  u64 route, u8 connection_id, u8 connection_key,
-			  u8 link, u8 depth, bool boot)
+static void update_switch(struct tb_switch *sw, u64 route, u8 connection_id,
+			  u8 connection_key, u8 link, u8 depth, bool boot)
 {
+	struct tb_switch *parent_sw = tb_switch_parent(sw);
+
 	/* Disconnect from parent */
-	tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
-	/* Re-connect via updated port*/
+	tb_switch_downstream_port(sw)->remote = NULL;
+	/* Re-connect via updated port */
 	tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
 
 	/* Update with the new addressing information */
@@ -671,10 +672,7 @@ static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
 
 static void remove_switch(struct tb_switch *sw)
 {
-	struct tb_switch *parent_sw;
-
-	parent_sw = tb_to_switch(sw->dev.parent);
-	tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
+	tb_switch_downstream_port(sw)->remote = NULL;
 	tb_switch_remove(sw);
 }
 
@@ -755,7 +753,6 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
 	if (sw) {
 		u8 phy_port, sw_phy_port;
 
-		parent_sw = tb_to_switch(sw->dev.parent);
 		sw_phy_port = tb_phy_port_from_link(sw->link);
 		phy_port = tb_phy_port_from_link(link);
 
@@ -785,7 +782,7 @@ icm_fr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr)
 				route = tb_route(sw);
 			}
 
-			update_switch(parent_sw, sw, route, pkg->connection_id,
+			update_switch(sw, route, pkg->connection_id,
 				      pkg->connection_key, link, depth, boot);
 			tb_switch_put(sw);
 			return;
@@ -1236,9 +1233,8 @@ __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr,
 	if (sw) {
 		/* Update the switch if it is still in the same place */
 		if (tb_route(sw) == route && !!sw->authorized == authorized) {
-			parent_sw = tb_to_switch(sw->dev.parent);
-			update_switch(parent_sw, sw, route, pkg->connection_id,
-				      0, 0, 0, boot);
+			update_switch(sw, route, pkg->connection_id, 0, 0, 0,
+				      boot);
 			tb_switch_put(sw);
 			return;
 		}
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 51e86b5171c7..4f3d02c58c9e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2754,7 +2754,6 @@ static int tb_switch_update_link_attributes(struct tb_switch *sw)
  */
 int tb_switch_lane_bonding_enable(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
 	struct tb_port *up, *down;
 	u64 route = tb_route(sw);
 	int ret;
@@ -2766,7 +2765,7 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(route, parent);
+	down = tb_switch_downstream_port(sw);
 
 	if (!tb_port_is_width_supported(up, 2) ||
 	    !tb_port_is_width_supported(down, 2))
@@ -2808,7 +2807,6 @@ int tb_switch_lane_bonding_enable(struct tb_switch *sw)
  */
 void tb_switch_lane_bonding_disable(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_to_switch(sw->dev.parent);
 	struct tb_port *up, *down;
 
 	if (!tb_route(sw))
@@ -2818,7 +2816,7 @@ void tb_switch_lane_bonding_disable(struct tb_switch *sw)
 	if (!up->bonded)
 		return;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 
 	tb_port_lane_bonding_disable(up);
 	tb_port_lane_bonding_disable(down);
@@ -3476,7 +3474,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
 
 static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
@@ -3484,7 +3481,7 @@ static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_pm_secondary_enable(up);
 	if (ret)
 		return ret;
@@ -3494,7 +3491,6 @@ static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
 
 static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	bool up_clx_support, down_clx_support;
 	struct tb_port *up, *down;
 	int ret;
@@ -3510,7 +3506,7 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	/* Enable CLx only for first hop router (depth = 1) */
-	if (tb_route(parent))
+	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
 	ret = tb_switch_pm_secondary_resolve(sw);
@@ -3518,7 +3514,7 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 
 	up_clx_support = tb_port_clx_supported(up, clx);
 	down_clx_support = tb_port_clx_supported(down, clx);
@@ -3594,7 +3590,6 @@ int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 
 static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
@@ -3609,11 +3604,11 @@ static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 		return 0;
 
 	/* Disable CLx only for first hop router (depth = 1) */
-	if (tb_route(parent))
+	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_clx_disable(up, clx);
 	if (ret)
 		return ret;
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index c1af712ca728..1ab3aa114a17 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -628,7 +628,7 @@ static int tb_tunnel_usb3(struct tb *tb, struct tb_switch *sw)
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
 	 */
-	port = tb_port_at(tb_route(sw), parent);
+	port = tb_switch_downstream_port(sw);
 	down = tb_find_usb3_down(parent, port);
 	if (!down)
 		return 0;
@@ -1378,7 +1378,6 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
 {
 	struct tb_port *up, *down, *port;
 	struct tb_cm *tcm = tb_priv(tb);
-	struct tb_switch *parent_sw;
 	struct tb_tunnel *tunnel;
 
 	up = tb_switch_find_port(sw, TB_TYPE_PCIE_UP);
@@ -1389,9 +1388,8 @@ static int tb_tunnel_pci(struct tb *tb, struct tb_switch *sw)
 	 * Look up available down port. Since we are chaining it should
 	 * be found right above this switch.
 	 */
-	parent_sw = tb_to_switch(sw->dev.parent);
-	port = tb_port_at(tb_route(sw), parent_sw);
-	down = tb_find_pcie_down(parent_sw, port);
+	port = tb_switch_downstream_port(sw);
+	down = tb_find_pcie_down(tb_switch_parent(sw), port);
 	if (!down)
 		return 0;
 
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index f7728c7acdda..beaeea679e10 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -857,6 +857,20 @@ static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
 	return tb_to_switch(sw->dev.parent);
 }
 
+/**
+ * tb_switch_downstream_port() - Return downstream facing port of parent router
+ * @sw: Device router pointer
+ *
+ * Only call for device routers. Returns the downstream facing port of
+ * the parent router.
+ */
+static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
+{
+	if (WARN_ON(!tb_route(sw)))
+		return NULL;
+	return tb_port_at(tb_route(sw), tb_switch_parent(sw));
+}
+
 static inline bool tb_switch_is_light_ridge(const struct tb_switch *sw)
 {
 	return sw->config.vendor_id == PCI_VENDOR_ID_INTEL &&
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 626aca3124b1..a46203b33c5f 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -398,11 +398,10 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 
 	if (tb_route(sw)) {
 		bool unidirectional = sw->tmu.unidirectional;
-		struct tb_switch *parent = tb_switch_parent(sw);
 		struct tb_port *down, *up;
 		int ret;
 
-		down = tb_port_at(tb_route(sw), parent);
+		down = tb_switch_downstream_port(sw);
 		up = tb_upstream_port(sw);
 		/*
 		 * In case of uni-directional time sync, TMU handshake is
@@ -442,10 +441,9 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 
 static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *down, *up;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	up = tb_upstream_port(sw);
 	/*
 	 * In case of any failure in one of the steps when setting
@@ -457,7 +455,8 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 	tb_port_tmu_time_sync_disable(down);
 	tb_port_tmu_time_sync_disable(up);
 	if (unidirectional)
-		tb_switch_tmu_rate_write(parent, TB_SWITCH_TMU_RATE_OFF);
+		tb_switch_tmu_rate_write(tb_switch_parent(sw),
+					 TB_SWITCH_TMU_RATE_OFF);
 	else
 		tb_switch_tmu_rate_write(sw, TB_SWITCH_TMU_RATE_OFF);
 
@@ -472,12 +471,11 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
  */
 static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 
 	ret = tb_port_tmu_unidirectional_disable(up);
 	if (ret)
@@ -537,13 +535,13 @@ static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
  */
 static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
-	ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+	down = tb_switch_downstream_port(sw);
+	ret = tb_switch_tmu_rate_write(tb_switch_parent(sw),
+				       sw->tmu.rate_request);
 	if (ret)
 		return ret;
 
@@ -576,10 +574,9 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 
 static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *down, *up;
 
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	up = tb_upstream_port(sw);
 	/*
 	 * In case of any failure in one of the steps when change mode,
@@ -589,7 +586,7 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 	 */
 	tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional);
 	if (sw->tmu.unidirectional_request)
-		tb_switch_tmu_rate_write(parent, sw->tmu.rate);
+		tb_switch_tmu_rate_write(tb_switch_parent(sw), sw->tmu.rate);
 	else
 		tb_switch_tmu_rate_write(sw, sw->tmu.rate);
 
@@ -599,18 +596,18 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 
 static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
 {
-	struct tb_switch *parent = tb_switch_parent(sw);
 	struct tb_port *up, *down;
 	int ret;
 
 	up = tb_upstream_port(sw);
-	down = tb_port_at(tb_route(sw), parent);
+	down = tb_switch_downstream_port(sw);
 	ret = tb_port_tmu_set_unidirectional(down, sw->tmu.unidirectional_request);
 	if (ret)
 		goto out;
 
 	if (sw->tmu.unidirectional_request)
-		ret = tb_switch_tmu_rate_write(parent, sw->tmu.rate_request);
+		ret = tb_switch_tmu_rate_write(tb_switch_parent(sw),
+					       sw->tmu.rate_request);
 	else
 		ret = tb_switch_tmu_rate_write(sw, sw->tmu.rate_request);
 	if (ret)
diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
index 485b6e430686..9f5a98347bee 100644
--- a/drivers/thunderbolt/usb4.c
+++ b/drivers/thunderbolt/usb4.c
@@ -234,8 +234,8 @@ static bool link_is_usb4(struct tb_port *port)
  */
 int usb4_switch_setup(struct tb_switch *sw)
 {
-	struct tb_port *downstream_port;
-	struct tb_switch *parent;
+	struct tb_switch *parent = tb_switch_parent(sw);
+	struct tb_port *down;
 	bool tbt3, xhci;
 	u32 val = 0;
 	int ret;
@@ -249,9 +249,8 @@ int usb4_switch_setup(struct tb_switch *sw)
 	if (ret)
 		return ret;
 
-	parent = tb_switch_parent(sw);
-	downstream_port = tb_port_at(tb_route(sw), parent);
-	sw->link_usb4 = link_is_usb4(downstream_port);
+	down = tb_switch_downstream_port(sw);
+	sw->link_usb4 = link_is_usb4(down);
 	tb_sw_dbg(sw, "link: %s\n", sw->link_usb4 ? "USB4" : "TBT");
 
 	xhci = val & ROUTER_CS_6_HCI;
-- 
2.39.2



* [PATCH 02/20] thunderbolt: Introduce tb_xdomain_downstream_port()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
  2023-05-29 10:04 ` [PATCH 01/20] thunderbolt: Introduce tb_switch_downstream_port() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 03/20] thunderbolt: Fix a couple of style issues in TMU code Mika Westerberg
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

In the same way we did for routers, add a function that returns the
parent router's downstream facing port for XDomain devices.
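
The helper itself (from the tb.h hunk below) mirrors
tb_switch_downstream_port() but keys off the XDomain route:

	static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
	{
		return tb_port_at(xd->route, tb_xdomain_parent(xd));
	}

	/* typical call-site conversion in xdomain.c */
	-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
	+	port = tb_xdomain_downstream_port(xd);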

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.h      | 11 +++++++++++
 drivers/thunderbolt/xdomain.c | 16 +++++++---------
 2 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index beaeea679e10..797d8bb73bfa 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1197,6 +1197,17 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
 	return tb_to_switch(xd->dev.parent);
 }
 
+/**
+ * tb_xdomain_downstream_port() - Return downstream facing port of parent router
+ * @xd: Xdomain pointer
+ *
+ * Returns the downstream port the XDomain is connected to.
+ */
+static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
+{
+	return tb_port_at(xd->route, tb_xdomain_parent(xd));
+}
+
 int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
 			size_t size);
 int tb_retimer_scan(struct tb_port *port, bool add);
diff --git a/drivers/thunderbolt/xdomain.c b/drivers/thunderbolt/xdomain.c
index e2b54887d331..8389961b2d45 100644
--- a/drivers/thunderbolt/xdomain.c
+++ b/drivers/thunderbolt/xdomain.c
@@ -537,9 +537,8 @@ static int tb_xdp_link_state_status_request(struct tb_ctl *ctl, u64 route,
 static int tb_xdp_link_state_status_response(struct tb *tb, struct tb_ctl *ctl,
 					     struct tb_xdomain *xd, u8 sequence)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
 	struct tb_xdp_link_state_status_response res;
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	u32 val[2];
 	int ret;
 
@@ -1137,7 +1136,7 @@ static int tb_xdomain_update_link_attributes(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	ret = tb_port_get_link_speed(port);
 	if (ret < 0)
@@ -1251,8 +1250,7 @@ static int tb_xdomain_get_link_status(struct tb_xdomain *xd)
 static int tb_xdomain_link_state_change(struct tb_xdomain *xd,
 					unsigned int width)
 {
-	struct tb_switch *sw = tb_to_switch(xd->dev.parent);
-	struct tb_port *port = tb_port_at(xd->route, sw);
+	struct tb_port *port = tb_xdomain_downstream_port(xd);
 	struct tb *tb = xd->tb;
 	u8 tlw, tls;
 	u32 val;
@@ -1309,7 +1307,7 @@ static int tb_xdomain_bond_lanes_uuid_high(struct tb_xdomain *xd)
 		return -ETIMEDOUT;
 	}
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 
 	/*
 	 * We can't use tb_xdomain_lane_bonding_enable() here because it
@@ -1425,7 +1423,7 @@ static int tb_xdomain_get_properties(struct tb_xdomain *xd)
 		if (xd->bonding_possible) {
 			struct tb_port *port;
 
-			port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+			port = tb_xdomain_downstream_port(xd);
 			if (!port->bonded)
 				tb_port_disable(port->dual_link_port);
 		}
@@ -1979,7 +1977,7 @@ int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
 	struct tb_port *port;
 	int ret;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (!port->dual_link_port)
 		return -ENODEV;
 
@@ -2024,7 +2022,7 @@ void tb_xdomain_lane_bonding_disable(struct tb_xdomain *xd)
 {
 	struct tb_port *port;
 
-	port = tb_port_at(xd->route, tb_xdomain_parent(xd));
+	port = tb_xdomain_downstream_port(xd);
 	if (port->dual_link_port) {
 		tb_port_lane_bonding_disable(port);
 		if (tb_port_wait_for_link_width(port, 1, 100) == -ETIMEDOUT)
-- 
2.39.2



* [PATCH 03/20] thunderbolt: Fix a couple of style issues in TMU code
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
  2023-05-29 10:04 ` [PATCH 01/20] thunderbolt: Introduce tb_switch_downstream_port() Mika Westerberg
  2023-05-29 10:04 ` [PATCH 02/20] thunderbolt: Introduce tb_xdomain_downstream_port() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 04/20] thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled() Mika Westerberg
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Drop an extra empty line and get rid of the '__' prefix in the function
names. No functional changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tmu.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index a46203b33c5f..8614e154be5f 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -395,7 +395,6 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 	if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
 		return 0;
 
-
 	if (tb_route(sw)) {
 		bool unidirectional = sw->tmu.unidirectional;
 		struct tb_port *down, *up;
@@ -439,7 +438,7 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 	return 0;
 }
 
-static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
+static void tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
 {
 	struct tb_port *down, *up;
 
@@ -469,7 +468,7 @@ static void __tb_switch_tmu_off(struct tb_switch *sw, bool unidirectional)
  * This function is called when the previous TMU mode was
  * TB_SWITCH_TMU_RATE_OFF.
  */
-static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
+static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -500,7 +499,7 @@ static int __tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 	return 0;
 
 out:
-	__tb_switch_tmu_off(sw, false);
+	tb_switch_tmu_off(sw, false);
 	return ret;
 }
 
@@ -533,7 +532,7 @@ static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
  * This function is called when the previous TMU mode was
  * TB_SWITCH_TMU_RATE_OFF.
  */
-static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
+static int tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -568,11 +567,11 @@ static int __tb_switch_tmu_enable_unidirectional(struct tb_switch *sw)
 	return 0;
 
 out:
-	__tb_switch_tmu_off(sw, true);
+	tb_switch_tmu_off(sw, true);
 	return ret;
 }
 
-static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
+static void tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 {
 	struct tb_port *down, *up;
 
@@ -594,7 +593,7 @@ static void __tb_switch_tmu_change_mode_prev(struct tb_switch *sw)
 	tb_port_tmu_set_unidirectional(up, sw->tmu.unidirectional);
 }
 
-static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
+static int tb_switch_tmu_change_mode(struct tb_switch *sw)
 {
 	struct tb_port *up, *down;
 	int ret;
@@ -632,7 +631,7 @@ static int __tb_switch_tmu_change_mode(struct tb_switch *sw)
 	return 0;
 
 out:
-	__tb_switch_tmu_change_mode_prev(sw);
+	tb_switch_tmu_change_mode_prev(sw);
 	return ret;
 }
 
@@ -695,13 +694,13 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 		 */
 		if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF) {
 			if (unidirectional)
-				ret = __tb_switch_tmu_enable_unidirectional(sw);
+				ret = tb_switch_tmu_enable_unidirectional(sw);
 			else
-				ret = __tb_switch_tmu_enable_bidirectional(sw);
+				ret = tb_switch_tmu_enable_bidirectional(sw);
 			if (ret)
 				return ret;
 		} else if (sw->tmu.rate == TB_SWITCH_TMU_RATE_NORMAL) {
-			ret = __tb_switch_tmu_change_mode(sw);
+			ret = tb_switch_tmu_change_mode(sw);
 			if (ret)
 				return ret;
 		}
-- 
2.39.2



* [PATCH 04/20] thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (2 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 03/20] thunderbolt: Fix a couple of style issues in TMU code Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 05/20] thunderbolt: Rework Titan Ridge TMU objection disable function Mika Westerberg
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

There is no point in passing it as we already have a field for that.
While there, clean up the kernel-doc by dropping details that do not
really belong in the API documentation (they can be figured out from the
spec itself).

No functional changes.
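
After this change the check is driven purely by the requested
configuration already stored in sw->tmu (from the tb.h hunk below):

	static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
	{
		return sw->tmu.rate == sw->tmu.rate_request &&
		       sw->tmu.unidirectional == sw->tmu.unidirectional_request;
	}

	/* so callers simply do */
	if (tb_switch_tmu_is_enabled(sw))
		return 0;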

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c  |  2 +-
 drivers/thunderbolt/tb.h  | 10 ++++------
 drivers/thunderbolt/tmu.c | 11 ++++-------
 3 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 1ab3aa114a17..72041e29e544 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -362,7 +362,7 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	int ret;
 
 	/* If it is already enabled in correct mode, don't touch it */
-	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
 
 	ret = tb_switch_tmu_disable(sw);
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 797d8bb73bfa..0ac653bfd97e 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -995,16 +995,14 @@ void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
- * @unidirectional: If uni-directional (bi-directional otherwise)
  *
- * Return true if hardware TMU configuration matches the one passed in
- * as parameter. That is HiFi/Normal and either uni-directional or bi-directional.
+ * Return true if hardware TMU configuration matches the requested
+ * configuration.
  */
-static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw,
-					    bool unidirectional)
+static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
 {
 	return sw->tmu.rate == sw->tmu.rate_request &&
-	       sw->tmu.unidirectional == unidirectional;
+	       sw->tmu.unidirectional == sw->tmu.unidirectional_request;
 }
 
 static inline const char *tb_switch_clx_name(enum tb_clx clx)
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 8614e154be5f..5d508ea8baa5 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -639,12 +639,9 @@ static int tb_switch_tmu_change_mode(struct tb_switch *sw)
  * tb_switch_tmu_enable() - Enable TMU on a router
  * @sw: Router whose TMU to enable
  *
- * Enables TMU of a router to be in uni-directional Normal/HiFi
- * or bi-directional HiFi mode. Calling tb_switch_tmu_configure() is required
- * before calling this function, to select the mode Normal/HiFi and
- * directionality (uni-directional/bi-directional).
- * In HiFi mode all tunneling should work. In Normal mode, DP tunneling can't
- * work. Uni-directional mode is required for CLx (Link Low-Power) to work.
+ * Enables TMU of a router to be in uni-directional Normal/HiFi or
+ * bi-directional HiFi mode. Calling tb_switch_tmu_configure() is
+ * required before calling this function.
  */
 int tb_switch_tmu_enable(struct tb_switch *sw)
 {
@@ -662,7 +659,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	if (!tb_switch_is_clx_supported(sw))
 		return 0;
 
-	if (tb_switch_tmu_is_enabled(sw, sw->tmu.unidirectional_request))
+	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
 
 	if (tb_switch_is_titan_ridge(sw) && unidirectional) {
-- 
2.39.2



* [PATCH 05/20] thunderbolt: Rework Titan Ridge TMU objection disable function
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (3 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 04/20] thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 06/20] thunderbolt: Get rid of tb_switch_enable_tmu_1st_child() Mika Westerberg
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Currently this is split into two functions, one of which has a misleading
name (tb_switch_tmu_unidirectional_enable()).

Make this easier to read: consolidate the two functions into one and give
it a name that explains what it actually does. Also use the two constants
that were added but never used, to make it clear which bits are being
set.

No functional changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tmu.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 5d508ea8baa5..30f18806abb7 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -503,8 +503,10 @@ static int tb_switch_tmu_enable_bidirectional(struct tb_switch *sw)
 	return ret;
 }
 
-static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
+/* Only needed for Titan Ridge */
+static int tb_switch_tmu_disable_objections(struct tb_switch *sw)
 {
+	struct tb_port *up = tb_upstream_port(sw);
 	u32 val;
 	int ret;
 
@@ -515,17 +517,15 @@ static int tb_switch_tmu_objection_mask(struct tb_switch *sw)
 
 	val &= ~TB_TIME_VSEC_3_CS_9_TMU_OBJ_MASK;
 
-	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
-			   sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
-}
-
-static int tb_switch_tmu_unidirectional_enable(struct tb_switch *sw)
-{
-	struct tb_port *up = tb_upstream_port(sw);
+	ret = tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			  sw->cap_vsec_tmu + TB_TIME_VSEC_3_CS_9, 1);
+	if (ret)
+		return ret;
 
 	return tb_port_tmu_write(up, TMU_ADP_CS_6,
 				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK,
-				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_MASK);
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL1 |
+				 TMU_ADP_CS_6_DISABLE_TMU_OBJ_CL2);
 }
 
 /*
@@ -670,11 +670,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 		if (!tb_switch_is_clx_enabled(sw, TB_CL1))
 			return -EOPNOTSUPP;
 
-		ret = tb_switch_tmu_objection_mask(sw);
-		if (ret)
-			return ret;
-
-		ret = tb_switch_tmu_unidirectional_enable(sw);
+		ret = tb_switch_tmu_disable_objections(sw);
 		if (ret)
 			return ret;
 	}
-- 
2.39.2



* [PATCH 06/20] thunderbolt: Get rid of tb_switch_enable_tmu_1st_child()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (4 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 05/20] thunderbolt: Rework Titan Ridge TMU objection disable function Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 07/20] thunderbolt: Move TMU configuration to tb_enable_tmu() Mika Westerberg
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This better belongs in the software connection manager flows in tb.c.
Also name the new function tb_increase_tmu_accuracy() to match what it
actually does.
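
Condensed from the tb.c hunk below: once a DP tunnel is discovered or
established, the new helper walks the host router's immediate children
and raises their TMU accuracy:

	static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
	{
		struct tb_switch *sw;

		if (!tunnel)
			return;

		/* raise TMU accuracy of first depth routers once a DP tunnel exists */
		sw = tunnel->tb->root_switch;
		device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
	}

with tb_increase_switch_tmu_accuracy() doing the per-router
tb_switch_tmu_configure() + tb_switch_tmu_enable() calls.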

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c  | 43 +++++++++++++++++++++++++++++++--------
 drivers/thunderbolt/tb.h  |  2 --
 drivers/thunderbolt/tmu.c | 29 --------------------------
 3 files changed, 34 insertions(+), 40 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 72041e29e544..39ec7094fe17 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -240,6 +240,38 @@ static void tb_discover_dp_resources(struct tb *tb)
 	}
 }
 
+static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
+{
+	struct tb_switch *sw;
+
+	sw = tb_to_switch(dev);
+	if (sw) {
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
+					tb_switch_is_clx_enabled(sw, TB_CL1));
+		if (tb_switch_tmu_enable(sw))
+			tb_sw_warn(sw, "failed to increase TMU rate\n");
+	}
+
+	return 0;
+}
+
+static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
+{
+	struct tb_switch *sw;
+
+	if (!tunnel)
+		return;
+
+	/*
+	 * Once first DP tunnel is established we change the TMU
+	 * accuracy of first depth child routers (and the host router)
+	 * to the highest. This is needed for the DP tunneling to work
+	 * but also allows CL0s.
+	 */
+	sw = tunnel->tb->root_switch;
+	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
+}
+
 static void tb_switch_discover_tunnels(struct tb_switch *sw,
 				       struct list_head *list,
 				       bool alloc_hopids)
@@ -253,13 +285,7 @@ static void tb_switch_discover_tunnels(struct tb_switch *sw,
 		switch (port->config.type) {
 		case TB_TYPE_DP_HDMI_IN:
 			tunnel = tb_tunnel_discover_dp(tb, port, alloc_hopids);
-			/*
-			 * In case of DP tunnel exists, change host router's
-			 * 1st children TMU mode to HiFi for CL0s to work.
-			 */
-			if (tunnel)
-				tb_switch_enable_tmu_1st_child(tb->root_switch,
-						TB_SWITCH_TMU_RATE_HIFI);
+			tb_increase_tmu_accuracy(tunnel);
 			break;
 
 		case TB_TYPE_PCIE_DOWN:
@@ -1263,8 +1289,7 @@ static void tb_tunnel_dp(struct tb *tb)
 	 * In case of DP tunnel exists, change host router's 1st children
 	 * TMU mode to HiFi for CL0s to work.
 	 */
-	tb_switch_enable_tmu_1st_child(tb->root_switch, TB_SWITCH_TMU_RATE_HIFI);
-
+	tb_increase_tmu_accuracy(tunnel);
 	return;
 
 err_free:
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 0ac653bfd97e..8cc64b79f35c 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -990,8 +990,6 @@ int tb_switch_tmu_enable(struct tb_switch *sw);
 void tb_switch_tmu_configure(struct tb_switch *sw,
 			     enum tb_switch_tmu_rate rate,
 			     bool unidirectional);
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate);
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 30f18806abb7..84abb783a6d9 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -731,32 +731,3 @@ void tb_switch_tmu_configure(struct tb_switch *sw,
 	sw->tmu.unidirectional_request = unidirectional;
 	sw->tmu.rate_request = rate;
 }
-
-static int tb_switch_tmu_config_enable(struct device *dev, void *rate)
-{
-	if (tb_is_switch(dev)) {
-		struct tb_switch *sw = tb_to_switch(dev);
-
-		tb_switch_tmu_configure(sw, *(enum tb_switch_tmu_rate *)rate,
-					tb_switch_is_clx_enabled(sw, TB_CL1));
-		if (tb_switch_tmu_enable(sw))
-			tb_sw_dbg(sw, "fail switching TMU mode for 1st depth router\n");
-	}
-
-	return 0;
-}
-
-/**
- * tb_switch_enable_tmu_1st_child - Configure and enable TMU for 1st chidren
- * @sw: The router to configure and enable it's children TMU
- * @rate: Rate of the TMU to configure the router's chidren to
- *
- * Configures and enables the TMU mode of 1st depth children of the specified
- * router to the specified rate.
- */
-void tb_switch_enable_tmu_1st_child(struct tb_switch *sw,
-				    enum tb_switch_tmu_rate rate)
-{
-	device_for_each_child(&sw->dev, &rate,
-			      tb_switch_tmu_config_enable);
-}
-- 
2.39.2



* [PATCH 07/20] thunderbolt: Move TMU configuration to tb_enable_tmu()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (5 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 06/20] thunderbolt: Get rid of tb_switch_enable_tmu_1st_child() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 08/20] thunderbolt: Move tb_enable_tmu() close to other TMU functions Mika Westerberg
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

There is no need to duplicate the code that enables the TMU. Also update
the comment to better explain why we do this in the first place.

No functional changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 39ec7094fe17..0630b877136e 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -387,6 +387,16 @@ static int tb_enable_tmu(struct tb_switch *sw)
 {
 	int ret;
 
+	/*
+	 * If CL1 is enabled then we need to configure the TMU accuracy
+	 * level to normal. Otherwise we keep the TMU running at the
+	 * highest accuracy.
+	 */
+	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+	else
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+
 	/* If it is already enabled in correct mode, don't touch it */
 	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
@@ -873,16 +883,6 @@ static void tb_scan_port(struct tb_port *port)
 				   tb_switch_clx_name(TB_CL1));
 	}
 
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		/*
-		 * To support highest CLx state, we set router's TMU to
-		 * Normal-Uni mode.
-		 */
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to enable TMU\n");
 
@@ -2035,16 +2035,6 @@ static void tb_restore_children(struct tb_switch *sw)
 		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
 			   tb_switch_clx_name(TB_CL1));
 
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		/*
-		 * To support highest CLx state, we set router's TMU to
-		 * Normal-Uni mode.
-		 */
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		/* If CLx disabled, configure router's TMU to HiFi-Bidir mode*/
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to restore TMU configuration\n");
 
-- 
2.39.2



* [PATCH 08/20] thunderbolt: Move tb_enable_tmu() close to other TMU functions
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (6 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 07/20] thunderbolt: Move TMU configuration to tb_enable_tmu() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 09/20] thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure() Mika Westerberg
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This makes the code easier to follow. No functional changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c | 58 ++++++++++++++++++++--------------------
 1 file changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 0630b877136e..41c353f462e7 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -272,6 +272,35 @@ static void tb_increase_tmu_accuracy(struct tb_tunnel *tunnel)
 	device_for_each_child(&sw->dev, NULL, tb_increase_switch_tmu_accuracy);
 }
 
+static int tb_enable_tmu(struct tb_switch *sw)
+{
+	int ret;
+
+	/*
+	 * If CL1 is enabled then we need to configure the TMU accuracy
+	 * level to normal. Otherwise we keep the TMU running at the
+	 * highest accuracy.
+	 */
+	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+	else
+		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+
+	/* If it is already enabled in correct mode, don't touch it */
+	if (tb_switch_tmu_is_enabled(sw))
+		return 0;
+
+	ret = tb_switch_tmu_disable(sw);
+	if (ret)
+		return ret;
+
+	ret = tb_switch_tmu_post_time(sw);
+	if (ret)
+		return ret;
+
+	return tb_switch_tmu_enable(sw);
+}
+
 static void tb_switch_discover_tunnels(struct tb_switch *sw,
 				       struct list_head *list,
 				       bool alloc_hopids)
@@ -383,35 +412,6 @@ static void tb_scan_xdomain(struct tb_port *port)
 	}
 }
 
-static int tb_enable_tmu(struct tb_switch *sw)
-{
-	int ret;
-
-	/*
-	 * If CL1 is enabled then we need to configure the TMU accuracy
-	 * level to normal. Otherwise we keep the TMU running at the
-	 * highest accuracy.
-	 */
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
-	else
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
-
-	/* If it is already enabled in correct mode, don't touch it */
-	if (tb_switch_tmu_is_enabled(sw))
-		return 0;
-
-	ret = tb_switch_tmu_disable(sw);
-	if (ret)
-		return ret;
-
-	ret = tb_switch_tmu_post_time(sw);
-	if (ret)
-		return ret;
-
-	return tb_switch_tmu_enable(sw);
-}
-
 /**
  * tb_find_unused_port() - return the first inactive port on @sw
  * @sw: Switch to find the port on
-- 
2.39.2



* [PATCH 09/20] thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (7 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 08/20] thunderbolt: Move tb_enable_tmu() close to other TMU functions Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 10/20] thunderbolt: Move CLx support functions into clx.c Mika Westerberg
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Instead of doing this at enable time, we can do it already in
tb_switch_tmu_configure().
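
Since tb_switch_tmu_configure() can now fail (it returns -EINVAL when a
unidirectional mode is requested but the router has no ucap support),
tb_enable_tmu() propagates its return value (see the tb.c hunk below):

	ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
	if (ret)
		return ret;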

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c  |  6 ++++--
 drivers/thunderbolt/tb.h  |  5 ++---
 drivers/thunderbolt/tmu.c | 13 ++++++++-----
 3 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 41c353f462e7..91459bf2fd0f 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -282,9 +282,11 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	 * highest accuracy.
 	 */
 	if (tb_switch_is_clx_enabled(sw, TB_CL1))
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
+		ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
 	else
-		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+		ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
+	if (ret)
+		return ret;
 
 	/* If it is already enabled in correct mode, don't touch it */
 	if (tb_switch_tmu_is_enabled(sw))
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 8cc64b79f35c..07e4e7b37f13 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -987,9 +987,8 @@ int tb_switch_tmu_init(struct tb_switch *sw);
 int tb_switch_tmu_post_time(struct tb_switch *sw);
 int tb_switch_tmu_disable(struct tb_switch *sw);
 int tb_switch_tmu_enable(struct tb_switch *sw);
-void tb_switch_tmu_configure(struct tb_switch *sw,
-			     enum tb_switch_tmu_rate rate,
-			     bool unidirectional);
+int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate,
+			    bool unidirectional);
 /**
  * tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
  * @sw: Router whose TMU mode to check
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 84abb783a6d9..be310d97ea7b 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -648,9 +648,6 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	bool unidirectional = sw->tmu.unidirectional_request;
 	int ret;
 
-	if (unidirectional && !sw->tmu.has_ucap)
-		return -EOPNOTSUPP;
-
 	/*
 	 * No need to enable TMU on devices that don't support CLx since on
 	 * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi
@@ -724,10 +721,16 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
  *
  * Selects the rate of the TMU and directionality (uni-directional or
  * bi-directional). Must be called before tb_switch_tmu_enable().
+ *
+ * Returns %0 in success and negative errno otherwise.
  */
-void tb_switch_tmu_configure(struct tb_switch *sw,
-			     enum tb_switch_tmu_rate rate, bool unidirectional)
+int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_rate rate,
+			    bool unidirectional)
 {
+	if (unidirectional && !sw->tmu.has_ucap)
+		return -EINVAL;
+
 	sw->tmu.unidirectional_request = unidirectional;
 	sw->tmu.rate_request = rate;
+	return 0;
 }
-- 
2.39.2



* [PATCH 10/20] thunderbolt: Move CLx support functions into clx.c
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (8 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 09/20] thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 11/20] thunderbolt: Get rid of __tb_switch_[en|dis]able_clx() Mika Westerberg
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

These really do not belong in switch.c so move them into their own file.
While doing this, rename the functions to match the conventions used
elsewhere in the driver.

No functional changes.
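
At a glance, the new clx.c exposes a small public surface (all taken
from the new file below), gated by a module parameter:

	static bool clx_enabled = true;
	module_param_named(clx, clx_enabled, bool, 0444);
	MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");

	bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask);
	int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx);
	int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx);

Everything else (tb_port_clx_set(), tb_switch_mask_clx_objections() and
friends) stays static to the file.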

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/Makefile  |   2 +-
 drivers/thunderbolt/clx.c     | 362 ++++++++++++++++++++++++++++++++++
 drivers/thunderbolt/debugfs.c |   2 +-
 drivers/thunderbolt/switch.c  | 362 +---------------------------------
 drivers/thunderbolt/tb.c      |   8 +-
 drivers/thunderbolt/tb.h      |  17 +-
 drivers/thunderbolt/tmu.c     |   6 +-
 7 files changed, 381 insertions(+), 378 deletions(-)
 create mode 100644 drivers/thunderbolt/clx.c

diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index 78fd365893c1..c8b3d7b78098 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -2,7 +2,7 @@
 obj-${CONFIG_USB4} := thunderbolt.o
 thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
-thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o
+thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o
 
 thunderbolt-${CONFIG_ACPI} += acpi.o
 thunderbolt-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
new file mode 100644
index 000000000000..d5b46a8e57c9
--- /dev/null
+++ b/drivers/thunderbolt/clx.c
@@ -0,0 +1,362 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * CLx support
+ *
+ * Copyright (C) 2020 - 2023, Intel Corporation
+ * Authors: Gil Fine <gil.fine@intel.com>
+ *	    Mika Westerberg <mika.westerberg@linux.intel.com>
+ */
+
+#include <linux/module.h>
+
+#include "tb.h"
+
+static bool clx_enabled = true;
+module_param_named(clx, clx_enabled, bool, 0444);
+MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
+
+static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
+{
+	u32 phy;
+	int ret;
+
+	ret = tb_port_read(port, &phy, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	if (secondary)
+		phy |= LANE_ADP_CS_1_PMS;
+	else
+		phy &= ~LANE_ADP_CS_1_PMS;
+
+	return tb_port_write(port, &phy, TB_CFG_PORT,
+			     port->cap_phy + LANE_ADP_CS_1, 1);
+}
+
+static int tb_port_pm_secondary_enable(struct tb_port *port)
+{
+	return tb_port_pm_secondary_set(port, true);
+}
+
+static int tb_port_pm_secondary_disable(struct tb_port *port)
+{
+	return tb_port_pm_secondary_set(port, false);
+}
+
+/* Called for USB4 or Titan Ridge routers only */
+static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
+{
+	u32 val, mask = 0;
+	bool ret;
+
+	/* Don't enable CLx in case of two single-lane links */
+	if (!port->bonded && port->dual_link_port)
+		return false;
+
+	/* Don't enable CLx in case of inter-domain link */
+	if (port->xdomain)
+		return false;
+
+	if (tb_switch_is_usb4(port->sw)) {
+		if (!usb4_port_clx_supported(port))
+			return false;
+	} else if (!tb_lc_is_clx_supported(port)) {
+		return false;
+	}
+
+	if (clx_mask & TB_CL1) {
+		/* CL0s and CL1 are enabled and supported together */
+		mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
+	}
+	if (clx_mask & TB_CL2)
+		mask |= LANE_ADP_CS_0_CL2_SUPPORT;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_0, 1);
+	if (ret)
+		return false;
+
+	return !!(val & mask);
+}
+
+static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
+{
+	u32 phy, mask;
+	int ret;
+
+	/* CL0s and CL1 are enabled and supported together */
+	if (clx == TB_CL1)
+		mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
+	else
+		/* For now we support only CL0s and CL1. Not CL2 */
+		return -EOPNOTSUPP;
+
+	ret = tb_port_read(port, &phy, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	if (enable)
+		phy |= mask;
+	else
+		phy &= ~mask;
+
+	return tb_port_write(port, &phy, TB_CFG_PORT,
+			     port->cap_phy + LANE_ADP_CS_1, 1);
+}
+
+static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx)
+{
+	return tb_port_clx_set(port, clx, false);
+}
+
+static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
+{
+	return tb_port_clx_set(port, clx, true);
+}
+
+/**
+ * tb_port_clx_is_enabled() - Is given CL state enabled
+ * @port: USB4 port to check
+ * @clx_mask: Mask of CL states to check
+ *
+ * Returns true if any of the given CL states is enabled for @port.
+ */
+bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask)
+{
+	u32 val, mask = 0;
+	int ret;
+
+	if (!tb_port_clx_supported(port, clx_mask))
+		return false;
+
+	if (clx_mask & TB_CL1)
+		mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
+	if (clx_mask & TB_CL2)
+		mask |= LANE_ADP_CS_1_CL2_ENABLE;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return false;
+
+	return !!(val & mask);
+}
+
+static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
+{
+	struct tb_port *up, *down;
+	int ret;
+
+	if (!tb_route(sw))
+		return 0;
+
+	up = tb_upstream_port(sw);
+	down = tb_switch_downstream_port(sw);
+	ret = tb_port_pm_secondary_enable(up);
+	if (ret)
+		return ret;
+
+	return tb_port_pm_secondary_disable(down);
+}
+
+static int tb_switch_mask_clx_objections(struct tb_switch *sw)
+{
+	int up_port = sw->config.upstream_port_number;
+	u32 offset, val[2], mask_obj, unmask_obj;
+	int ret, i;
+
+	/* Only Titan Ridge of pre-USB4 devices support CLx states */
+	if (!tb_switch_is_titan_ridge(sw))
+		return 0;
+
+	if (!tb_route(sw))
+		return 0;
+
+	/*
+	 * In Titan Ridge there are only 2 dual-lane Thunderbolt ports:
+	 * Port A consists of lane adapters 1,2 and
+	 * Port B consists of lane adapters 3,4
+	 * If upstream port is A, (lanes are 1,2), we mask objections from
+	 * port B (lanes 3,4) and unmask objections from Port A and vice-versa.
+	 */
+	if (up_port == 1) {
+		mask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
+		unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
+		offset = TB_LOW_PWR_C1_CL1;
+	} else {
+		mask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
+		unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
+		offset = TB_LOW_PWR_C3_CL1;
+	}
+
+	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
+			 sw->cap_lp + offset, ARRAY_SIZE(val));
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(val); i++) {
+		val[i] |= mask_obj;
+		val[i] &= ~unmask_obj;
+	}
+
+	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
+			   sw->cap_lp + offset, ARRAY_SIZE(val));
+}
+
+static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
+{
+	bool up_clx_support, down_clx_support;
+	struct tb_port *up, *down;
+	int ret;
+
+	if (!tb_switch_clx_is_supported(sw))
+		return 0;
+
+	/*
+	 * Enable CLx for host router's downstream port as part of the
+	 * downstream router enabling procedure.
+	 */
+	if (!tb_route(sw))
+		return 0;
+
+	/* Enable CLx only for first hop router (depth = 1) */
+	if (tb_route(tb_switch_parent(sw)))
+		return 0;
+
+	ret = tb_switch_pm_secondary_resolve(sw);
+	if (ret)
+		return ret;
+
+	up = tb_upstream_port(sw);
+	down = tb_switch_downstream_port(sw);
+
+	up_clx_support = tb_port_clx_supported(up, clx);
+	down_clx_support = tb_port_clx_supported(down, clx);
+
+	tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx),
+		    up_clx_support ? "" : "not ");
+	tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx),
+		    down_clx_support ? "" : "not ");
+
+	if (!up_clx_support || !down_clx_support)
+		return -EOPNOTSUPP;
+
+	ret = tb_port_clx_enable(up, clx);
+	if (ret)
+		return ret;
+
+	ret = tb_port_clx_enable(down, clx);
+	if (ret) {
+		tb_port_clx_disable(up, clx);
+		return ret;
+	}
+
+	ret = tb_switch_mask_clx_objections(sw);
+	if (ret) {
+		tb_port_clx_disable(up, clx);
+		tb_port_clx_disable(down, clx);
+		return ret;
+	}
+
+	sw->clx = clx;
+
+	tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx));
+	return 0;
+}
+
+/**
+ * tb_switch_clx_enable() - Enable CLx on upstream port of specified router
+ * @sw: Router to enable CLx for
+ * @clx: The CLx state to enable
+ *
+ * Enable CLx state only for first hop router. That is the most common
+ * use-case, that is intended for better thermal management, and so helps
+ * to improve performance. CLx is enabled only if both sides of the link
+ * support CLx, and if both sides of the link are not configured as two
+ * single lane links and only if the link is not inter-domain link. The
+ * complete set of conditions is described in CM Guide 1.0 section 8.1.
+ *
+ * Return: Returns 0 on success or an error code on failure.
+ */
+int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
+{
+	struct tb_switch *root_sw = sw->tb->root_switch;
+
+	if (!clx_enabled)
+		return 0;
+
+	/*
+	 * CLx is not enabled and validated on Intel USB4 platforms before
+	 * Alder Lake.
+	 */
+	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
+		return 0;
+
+	switch (clx) {
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
+		return __tb_switch_enable_clx(sw, clx);
+
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
+{
+	struct tb_port *up, *down;
+	int ret;
+
+	if (!tb_switch_clx_is_supported(sw))
+		return 0;
+
+	/*
+	 * Disable CLx for host router's downstream port as part of the
+	 * downstream router enabling procedure.
+	 */
+	if (!tb_route(sw))
+		return 0;
+
+	/* Disable CLx only for first hop router (depth = 1) */
+	if (tb_route(tb_switch_parent(sw)))
+		return 0;
+
+	up = tb_upstream_port(sw);
+	down = tb_switch_downstream_port(sw);
+	ret = tb_port_clx_disable(up, clx);
+	if (ret)
+		return ret;
+
+	ret = tb_port_clx_disable(down, clx);
+	if (ret)
+		return ret;
+
+	sw->clx = TB_CLX_DISABLE;
+
+	tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
+	return 0;
+}
+
+/**
+ * tb_switch_clx_disable() - Disable CLx on upstream port of specified router
+ * @sw: Router to disable CLx for
+ * @clx: The CLx state to disable
+ *
+ * Return: Returns 0 on success or an error code on failure.
+ */
+int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx)
+{
+	if (!clx_enabled)
+		return 0;
+
+	switch (clx) {
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
+		return __tb_switch_disable_clx(sw, clx);
+
+	default:
+		return -EOPNOTSUPP;
+	}
+}
diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index f92ad71ef983..e376ad25bf60 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -570,7 +570,7 @@ static int margining_run_write(void *data, u64 val)
 	 * CL states may interfere with lane margining so inform the user
 	 * and bail out.
 	 */
-	if (tb_port_is_clx_enabled(port, TB_CL1 | TB_CL2)) {
+	if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) {
 		tb_port_warn(port,
 			     "CL states are enabled, Disable them with clx=0 and re-connect\n");
 		ret = -EINVAL;
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 4f3d02c58c9e..984b5536e143 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -26,10 +26,6 @@ struct nvm_auth_status {
 	u32 status;
 };
 
-static bool clx_enabled = true;
-module_param_named(clx, clx_enabled, bool, 0444);
-MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
-
 /*
  * Hold NVM authentication failure status per switch This information
  * needs to stay around even when the switch gets power cycled so we
@@ -1183,135 +1179,6 @@ int tb_port_update_credits(struct tb_port *port)
 	return tb_port_do_update_credits(port->dual_link_port);
 }
 
-static int __tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
-{
-	u32 phy;
-	int ret;
-
-	ret = tb_port_read(port, &phy, TB_CFG_PORT,
-			   port->cap_phy + LANE_ADP_CS_1, 1);
-	if (ret)
-		return ret;
-
-	if (secondary)
-		phy |= LANE_ADP_CS_1_PMS;
-	else
-		phy &= ~LANE_ADP_CS_1_PMS;
-
-	return tb_port_write(port, &phy, TB_CFG_PORT,
-			     port->cap_phy + LANE_ADP_CS_1, 1);
-}
-
-static int tb_port_pm_secondary_enable(struct tb_port *port)
-{
-	return __tb_port_pm_secondary_set(port, true);
-}
-
-static int tb_port_pm_secondary_disable(struct tb_port *port)
-{
-	return __tb_port_pm_secondary_set(port, false);
-}
-
-/* Called for USB4 or Titan Ridge routers only */
-static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
-{
-	u32 val, mask = 0;
-	bool ret;
-
-	/* Don't enable CLx in case of two single-lane links */
-	if (!port->bonded && port->dual_link_port)
-		return false;
-
-	/* Don't enable CLx in case of inter-domain link */
-	if (port->xdomain)
-		return false;
-
-	if (tb_switch_is_usb4(port->sw)) {
-		if (!usb4_port_clx_supported(port))
-			return false;
-	} else if (!tb_lc_is_clx_supported(port)) {
-		return false;
-	}
-
-	if (clx_mask & TB_CL1) {
-		/* CL0s and CL1 are enabled and supported together */
-		mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
-	}
-	if (clx_mask & TB_CL2)
-		mask |= LANE_ADP_CS_0_CL2_SUPPORT;
-
-	ret = tb_port_read(port, &val, TB_CFG_PORT,
-			   port->cap_phy + LANE_ADP_CS_0, 1);
-	if (ret)
-		return false;
-
-	return !!(val & mask);
-}
-
-static int __tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
-{
-	u32 phy, mask;
-	int ret;
-
-	/* CL0s and CL1 are enabled and supported together */
-	if (clx == TB_CL1)
-		mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
-	else
-		/* For now we support only CL0s and CL1. Not CL2 */
-		return -EOPNOTSUPP;
-
-	ret = tb_port_read(port, &phy, TB_CFG_PORT,
-			   port->cap_phy + LANE_ADP_CS_1, 1);
-	if (ret)
-		return ret;
-
-	if (enable)
-		phy |= mask;
-	else
-		phy &= ~mask;
-
-	return tb_port_write(port, &phy, TB_CFG_PORT,
-			     port->cap_phy + LANE_ADP_CS_1, 1);
-}
-
-static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx)
-{
-	return __tb_port_clx_set(port, clx, false);
-}
-
-static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
-{
-	return __tb_port_clx_set(port, clx, true);
-}
-
-/**
- * tb_port_is_clx_enabled() - Is given CL state enabled
- * @port: USB4 port to check
- * @clx_mask: Mask of CL states to check
- *
- * Returns true if any of the given CL states is enabled for @port.
- */
-bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx_mask)
-{
-	u32 val, mask = 0;
-	int ret;
-
-	if (!tb_port_clx_supported(port, clx_mask))
-		return false;
-
-	if (clx_mask & TB_CL1)
-		mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
-	if (clx_mask & TB_CL2)
-		mask |= LANE_ADP_CS_1_CL2_ENABLE;
-
-	ret = tb_port_read(port, &val, TB_CFG_PORT,
-			   port->cap_phy + LANE_ADP_CS_1, 1);
-	if (ret)
-		return false;
-
-	return !!(val & mask);
-}
-
 static int tb_port_start_lane_initialization(struct tb_port *port)
 {
 	int ret;
@@ -3246,8 +3113,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
 	 * done for USB4 device too as CLx is re-enabled at resume.
 	 * CL0s and CL1 are enabled and supported together.
 	 */
-	if (tb_switch_is_clx_enabled(sw, TB_CL1)) {
-		if (tb_switch_disable_clx(sw, TB_CL1))
+	if (tb_switch_clx_is_enabled(sw, TB_CL1)) {
+		if (tb_switch_clx_disable(sw, TB_CL1))
 			tb_sw_warn(sw, "failed to disable %s on upstream port\n",
 				   tb_switch_clx_name(TB_CL1));
 	}
@@ -3472,231 +3339,6 @@ struct tb_port *tb_switch_find_port(struct tb_switch *sw,
 	return NULL;
 }
 
-static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
-{
-	struct tb_port *up, *down;
-	int ret;
-
-	if (!tb_route(sw))
-		return 0;
-
-	up = tb_upstream_port(sw);
-	down = tb_switch_downstream_port(sw);
-	ret = tb_port_pm_secondary_enable(up);
-	if (ret)
-		return ret;
-
-	return tb_port_pm_secondary_disable(down);
-}
-
-static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
-{
-	bool up_clx_support, down_clx_support;
-	struct tb_port *up, *down;
-	int ret;
-
-	if (!tb_switch_is_clx_supported(sw))
-		return 0;
-
-	/*
-	 * Enable CLx for host router's downstream port as part of the
-	 * downstream router enabling procedure.
-	 */
-	if (!tb_route(sw))
-		return 0;
-
-	/* Enable CLx only for first hop router (depth = 1) */
-	if (tb_route(tb_switch_parent(sw)))
-		return 0;
-
-	ret = tb_switch_pm_secondary_resolve(sw);
-	if (ret)
-		return ret;
-
-	up = tb_upstream_port(sw);
-	down = tb_switch_downstream_port(sw);
-
-	up_clx_support = tb_port_clx_supported(up, clx);
-	down_clx_support = tb_port_clx_supported(down, clx);
-
-	tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx),
-		    up_clx_support ? "" : "not ");
-	tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx),
-		    down_clx_support ? "" : "not ");
-
-	if (!up_clx_support || !down_clx_support)
-		return -EOPNOTSUPP;
-
-	ret = tb_port_clx_enable(up, clx);
-	if (ret)
-		return ret;
-
-	ret = tb_port_clx_enable(down, clx);
-	if (ret) {
-		tb_port_clx_disable(up, clx);
-		return ret;
-	}
-
-	ret = tb_switch_mask_clx_objections(sw);
-	if (ret) {
-		tb_port_clx_disable(up, clx);
-		tb_port_clx_disable(down, clx);
-		return ret;
-	}
-
-	sw->clx = clx;
-
-	tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx));
-	return 0;
-}
-
-/**
- * tb_switch_enable_clx() - Enable CLx on upstream port of specified router
- * @sw: Router to enable CLx for
- * @clx: The CLx state to enable
- *
- * Enable CLx state only for first hop router. That is the most common
- * use-case, that is intended for better thermal management, and so helps
- * to improve performance. CLx is enabled only if both sides of the link
- * support CLx, and if both sides of the link are not configured as two
- * single lane links and only if the link is not inter-domain link. The
- * complete set of conditions is described in CM Guide 1.0 section 8.1.
- *
- * Return: Returns 0 on success or an error code on failure.
- */
-int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
-{
-	struct tb_switch *root_sw = sw->tb->root_switch;
-
-	if (!clx_enabled)
-		return 0;
-
-	/*
-	 * CLx is not enabled and validated on Intel USB4 platforms before
-	 * Alder Lake.
-	 */
-	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
-		return 0;
-
-	switch (clx) {
-	case TB_CL1:
-		/* CL0s and CL1 are enabled and supported together */
-		return __tb_switch_enable_clx(sw, clx);
-
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
-static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
-{
-	struct tb_port *up, *down;
-	int ret;
-
-	if (!tb_switch_is_clx_supported(sw))
-		return 0;
-
-	/*
-	 * Disable CLx for host router's downstream port as part of the
-	 * downstream router enabling procedure.
-	 */
-	if (!tb_route(sw))
-		return 0;
-
-	/* Disable CLx only for first hop router (depth = 1) */
-	if (tb_route(tb_switch_parent(sw)))
-		return 0;
-
-	up = tb_upstream_port(sw);
-	down = tb_switch_downstream_port(sw);
-	ret = tb_port_clx_disable(up, clx);
-	if (ret)
-		return ret;
-
-	ret = tb_port_clx_disable(down, clx);
-	if (ret)
-		return ret;
-
-	sw->clx = TB_CLX_DISABLE;
-
-	tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
-	return 0;
-}
-
-/**
- * tb_switch_disable_clx() - Disable CLx on upstream port of specified router
- * @sw: Router to disable CLx for
- * @clx: The CLx state to disable
- *
- * Return: Returns 0 on success or an error code on failure.
- */
-int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
-{
-	if (!clx_enabled)
-		return 0;
-
-	switch (clx) {
-	case TB_CL1:
-		/* CL0s and CL1 are enabled and supported together */
-		return __tb_switch_disable_clx(sw, clx);
-
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
-/**
- * tb_switch_mask_clx_objections() - Mask CLx objections for a router
- * @sw: Router to mask objections for
- *
- * Mask the objections coming from the second depth routers in order to
- * stop these objections from interfering with the CLx states of the first
- * depth link.
- */
-int tb_switch_mask_clx_objections(struct tb_switch *sw)
-{
-	int up_port = sw->config.upstream_port_number;
-	u32 offset, val[2], mask_obj, unmask_obj;
-	int ret, i;
-
-	/* Only Titan Ridge of pre-USB4 devices support CLx states */
-	if (!tb_switch_is_titan_ridge(sw))
-		return 0;
-
-	if (!tb_route(sw))
-		return 0;
-
-	/*
-	 * In Titan Ridge there are only 2 dual-lane Thunderbolt ports:
-	 * Port A consists of lane adapters 1,2 and
-	 * Port B consists of lane adapters 3,4
-	 * If upstream port is A, (lanes are 1,2), we mask objections from
-	 * port B (lanes 3,4) and unmask objections from Port A and vice-versa.
-	 */
-	if (up_port == 1) {
-		mask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
-		unmask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
-		offset = TB_LOW_PWR_C1_CL1;
-	} else {
-		mask_obj = TB_LOW_PWR_C1_PORT_A_MASK;
-		unmask_obj = TB_LOW_PWR_C0_PORT_B_MASK;
-		offset = TB_LOW_PWR_C3_CL1;
-	}
-
-	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH,
-			 sw->cap_lp + offset, ARRAY_SIZE(val));
-	if (ret)
-		return ret;
-
-	for (i = 0; i < ARRAY_SIZE(val); i++) {
-		val[i] |= mask_obj;
-		val[i] &= ~unmask_obj;
-	}
-
-	return tb_sw_write(sw, &val, TB_CFG_SWITCH,
-			   sw->cap_lp + offset, ARRAY_SIZE(val));
-}
-
 /*
  * Can be used for read/write a specified PCIe bridge for any Thunderbolt 3
  * device. For now used only for Titan Ridge.
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 91459bf2fd0f..c7cfd740520a 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -247,7 +247,7 @@ static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
 	sw = tb_to_switch(dev);
 	if (sw) {
 		tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI,
-					tb_switch_is_clx_enabled(sw, TB_CL1));
+					tb_switch_clx_is_enabled(sw, TB_CL1));
 		if (tb_switch_tmu_enable(sw))
 			tb_sw_warn(sw, "failed to increase TMU rate\n");
 	}
@@ -281,7 +281,7 @@ static int tb_enable_tmu(struct tb_switch *sw)
 	 * level to normal. Otherwise we keep the TMU running at the
 	 * highest accuracy.
 	 */
-	if (tb_switch_is_clx_enabled(sw, TB_CL1))
+	if (tb_switch_clx_is_enabled(sw, TB_CL1))
 		ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_NORMAL, true);
 	else
 		ret = tb_switch_tmu_configure(sw, TB_SWITCH_TMU_RATE_HIFI, false);
@@ -879,7 +879,7 @@ static void tb_scan_port(struct tb_port *port)
 	if (discovery) {
 		tb_sw_dbg(sw, "discovery, not touching CL states\n");
 	} else {
-		ret = tb_switch_enable_clx(sw, TB_CL1);
+		ret = tb_switch_clx_enable(sw, TB_CL1);
 		if (ret && ret != -EOPNOTSUPP)
 			tb_sw_warn(sw, "failed to enable %s on upstream port\n",
 				   tb_switch_clx_name(TB_CL1));
@@ -2032,7 +2032,7 @@ static void tb_restore_children(struct tb_switch *sw)
 	 * CL0s and CL1 are enabled and supported together.
 	 * Silently ignore CLx re-enabling in case CLx is not supported.
 	 */
-	ret = tb_switch_enable_clx(sw, TB_CL1);
+	ret = tb_switch_clx_enable(sw, TB_CL1);
 	if (ret && ret != -EOPNOTSUPP)
 		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
 			   tb_switch_clx_name(TB_CL1));
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 07e4e7b37f13..d29bc7eab051 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1002,6 +1002,8 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
 	       sw->tmu.unidirectional == sw->tmu.unidirectional_request;
 }
 
+bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask);
+
 static inline const char *tb_switch_clx_name(enum tb_clx clx)
 {
 	switch (clx) {
@@ -1013,28 +1015,28 @@ static inline const char *tb_switch_clx_name(enum tb_clx clx)
 	}
 }
 
-int tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx);
-int tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx);
+int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx);
+int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx);
 
 /**
- * tb_switch_is_clx_enabled() - Checks if the CLx is enabled
+ * tb_switch_clx_is_enabled() - Checks if the CLx is enabled
  * @sw: Router to check for the CLx
  * @clx: The CLx state to check for
  *
  * Checks if the specified CLx is enabled on the router upstream link.
  * Not applicable for a host router.
  */
-static inline bool tb_switch_is_clx_enabled(const struct tb_switch *sw,
+static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw,
 					    enum tb_clx clx)
 {
 	return sw->clx == clx;
 }
 
 /**
- * tb_switch_is_clx_supported() - Is CLx supported on this type of router
+ * tb_switch_clx_is_supported() - Is CLx supported on this type of router
  * @sw: The router to check CLx support for
  */
-static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw)
+static inline bool tb_switch_clx_is_supported(const struct tb_switch *sw)
 {
 	if (sw->quirks & QUIRK_NO_CLX)
 		return false;
@@ -1042,8 +1044,6 @@ static inline bool tb_switch_is_clx_supported(const struct tb_switch *sw)
 	return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
 }
 
-int tb_switch_mask_clx_objections(struct tb_switch *sw);
-
 int tb_switch_pcie_l1_enable(struct tb_switch *sw);
 
 int tb_switch_xhci_connect(struct tb_switch *sw);
@@ -1089,7 +1089,6 @@ void tb_port_lane_bonding_disable(struct tb_port *port);
 int tb_port_wait_for_link_width(struct tb_port *port, int width,
 				int timeout_msec);
 int tb_port_update_credits(struct tb_port *port);
-bool tb_port_is_clx_enabled(struct tb_port *port, unsigned int clx);
 
 int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec);
 int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap);
diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index be310d97ea7b..6988704c845c 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -388,7 +388,7 @@ int tb_switch_tmu_disable(struct tb_switch *sw)
 	 * on these devices e.g. Alpine Ridge and earlier, the TMU mode
 	 * HiFi bi-directional is enabled by default and we don't change it.
 	 */
-	if (!tb_switch_is_clx_supported(sw))
+	if (!tb_switch_clx_is_supported(sw))
 		return 0;
 
 	/* Already disabled? */
@@ -653,7 +653,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	 * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi
 	 * bi-directional is enabled by default.
 	 */
-	if (!tb_switch_is_clx_supported(sw))
+	if (!tb_switch_clx_is_supported(sw))
 		return 0;
 
 	if (tb_switch_tmu_is_enabled(sw))
@@ -664,7 +664,7 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 		 * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are
 		 * enabled and supported together.
 		 */
-		if (!tb_switch_is_clx_enabled(sw, TB_CL1))
+		if (!tb_switch_clx_is_enabled(sw, TB_CL1))
 			return -EOPNOTSUPP;
 
 		ret = tb_switch_tmu_disable_objections(sw);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 11/20] thunderbolt: Get rid of __tb_switch_[en|dis]able_clx()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (9 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 10/20] thunderbolt: Move CLx support functions into clx.c Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 12/20] thunderbolt: Move CLx enabling into tb_enable_clx() Mika Westerberg
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

There is no need to have separate functions for these, so fold them
into tb_switch_clx_enable() and tb_switch_clx_disable().

No functional changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c | 91 ++++++++++++++++++---------------------
 1 file changed, 42 insertions(+), 49 deletions(-)
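
The diff below shows only the deltas; as a rough, simplified sketch
(not the exact resulting code; most checks and the error handling are
elided), the enable path ends up shaped like this after the fold:

int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
{
	struct tb_switch *root_sw = sw->tb->root_switch;

	/* Checks that used to live in the old wrapper run first */
	if (!clx_enabled)
		return 0;

	/* Not validated on Intel USB4 platforms before Alder Lake */
	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
		return 0;

	/* Only CL0s/CL1 are handled for now */
	if (clx != TB_CL1)
		return -EOPNOTSUPP;

	/*
	 * ...then the former __tb_switch_enable_clx() body follows:
	 * support checks, tb_port_clx_enable() on both ends of the
	 * link, objection masking and finally sw->clx = clx.
	 */
	return 0;
}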

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index d5b46a8e57c9..d1a502525425 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -205,12 +205,46 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw)
 			   sw->cap_lp + offset, ARRAY_SIZE(val));
 }
 
-static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
+/**
+ * tb_switch_clx_enable() - Enable CLx on upstream port of specified router
+ * @sw: Router to enable CLx for
+ * @clx: The CLx state to enable
+ *
+ * Enable CLx state only for first hop router. That is the most common
+ * use-case, that is intended for better thermal management, and so helps
+ * to improve performance. CLx is enabled only if both sides of the link
+ * support CLx, and if both sides of the link are not configured as two
+ * single lane links and only if the link is not inter-domain link. The
+ * complete set of conditions is described in CM Guide 1.0 section 8.1.
+ *
+ * Return: Returns 0 on success or an error code on failure.
+ */
+int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
 {
+	struct tb_switch *root_sw = sw->tb->root_switch;
 	bool up_clx_support, down_clx_support;
 	struct tb_port *up, *down;
 	int ret;
 
+	if (!clx_enabled)
+		return 0;
+
+	/*
+	 * CLx is not enabled and validated on Intel USB4 platforms before
+	 * Alder Lake.
+	 */
+	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
+		return 0;
+
+	switch (clx) {
+	case TB_CL1:
+		/* CL0s and CL1 are enabled and supported together */
+		break;
+
+	default:
+		return -EOPNOTSUPP;
+	}
+
 	if (!tb_switch_clx_is_supported(sw))
 		return 0;
 
@@ -267,47 +301,28 @@ static int __tb_switch_enable_clx(struct tb_switch *sw, enum tb_clx clx)
 }
 
 /**
- * tb_switch_clx_enable() - Enable CLx on upstream port of specified router
- * @sw: Router to enable CLx for
- * @clx: The CLx state to enable
- *
- * Enable CLx state only for first hop router. That is the most common
- * use-case, that is intended for better thermal management, and so helps
- * to improve performance. CLx is enabled only if both sides of the link
- * support CLx, and if both sides of the link are not configured as two
- * single lane links and only if the link is not inter-domain link. The
- * complete set of conditions is described in CM Guide 1.0 section 8.1.
+ * tb_switch_clx_disable() - Disable CLx on upstream port of specified router
+ * @sw: Router to disable CLx for
+ * @clx: The CLx state to disable
  *
  * Return: Returns 0 on success or an error code on failure.
  */
-int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
+int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx)
 {
-	struct tb_switch *root_sw = sw->tb->root_switch;
+	struct tb_port *up, *down;
+	int ret;
 
 	if (!clx_enabled)
 		return 0;
 
-	/*
-	 * CLx is not enabled and validated on Intel USB4 platforms before
-	 * Alder Lake.
-	 */
-	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
-		return 0;
-
 	switch (clx) {
 	case TB_CL1:
 		/* CL0s and CL1 are enabled and supported together */
-		return __tb_switch_enable_clx(sw, clx);
+		break;
 
 	default:
 		return -EOPNOTSUPP;
 	}
-}
-
-static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
-{
-	struct tb_port *up, *down;
-	int ret;
 
 	if (!tb_switch_clx_is_supported(sw))
 		return 0;
@@ -338,25 +353,3 @@ static int __tb_switch_disable_clx(struct tb_switch *sw, enum tb_clx clx)
 	tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
 	return 0;
 }
-
-/**
- * tb_switch_clx_disable() - Disable CLx on upstream port of specified router
- * @sw: Router to disable CLx for
- * @clx: The CLx state to disable
- *
- * Return: Returns 0 on success or an error code on failure.
- */
-int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx)
-{
-	if (!clx_enabled)
-		return 0;
-
-	switch (clx) {
-	case TB_CL1:
-		/* CL0s and CL1 are enabled and supported together */
-		return __tb_switch_disable_clx(sw, clx);
-
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 12/20] thunderbolt: Move CLx enabling into tb_enable_clx()
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (10 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 11/20] thunderbolt: Get rid of __tb_switch_[en|dis]able_clx() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 13/20] thunderbolt: Switch CL states from enum to a bitmask Mika Westerberg
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This avoids some duplication and makes the flow slightly easier to
understand. It also follows what we do in tb_enable_tmu().

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tb.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index c7cfd740520a..e4f1233eb958 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -240,6 +240,18 @@ static void tb_discover_dp_resources(struct tb *tb)
 	}
 }
 
+static int tb_enable_clx(struct tb_switch *sw)
+{
+	int ret;
+
+	/*
+	 * CL0s and CL1 are enabled and supported together.
+	 * Silently ignore CLx enabling in case CLx is not supported.
+	 */
+	ret = tb_switch_clx_enable(sw, TB_CL1);
+	return ret == -EOPNOTSUPP ? 0 : ret;
+}
+
 static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
 {
 	struct tb_switch *sw;
@@ -777,7 +789,6 @@ static void tb_scan_port(struct tb_port *port)
 	struct tb_port *upstream_port;
 	bool discovery = false;
 	struct tb_switch *sw;
-	int ret;
 
 	if (tb_is_upstream_port(port))
 		return;
@@ -876,14 +887,10 @@ static void tb_scan_port(struct tb_port *port)
 	 * CL0s and CL1 are enabled and supported together.
 	 * Silently ignore CLx enabling in case CLx is not supported.
 	 */
-	if (discovery) {
+	if (discovery)
 		tb_sw_dbg(sw, "discovery, not touching CL states\n");
-	} else {
-		ret = tb_switch_clx_enable(sw, TB_CL1);
-		if (ret && ret != -EOPNOTSUPP)
-			tb_sw_warn(sw, "failed to enable %s on upstream port\n",
-				   tb_switch_clx_name(TB_CL1));
-	}
+	else if (tb_enable_clx(sw))
+		tb_sw_warn(sw, "failed to enable CL states\n");
 
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to enable TMU\n");
@@ -2022,20 +2029,13 @@ static int tb_suspend_noirq(struct tb *tb)
 static void tb_restore_children(struct tb_switch *sw)
 {
 	struct tb_port *port;
-	int ret;
 
 	/* No need to restore if the router is already unplugged */
 	if (sw->is_unplugged)
 		return;
 
-	/*
-	 * CL0s and CL1 are enabled and supported together.
-	 * Silently ignore CLx re-enabling in case CLx is not supported.
-	 */
-	ret = tb_switch_clx_enable(sw, TB_CL1);
-	if (ret && ret != -EOPNOTSUPP)
-		tb_sw_warn(sw, "failed to re-enable %s on upstream port\n",
-			   tb_switch_clx_name(TB_CL1));
+	if (tb_enable_clx(sw))
+		tb_sw_warn(sw, "failed to re-enable CL states\n");
 
 	if (tb_enable_tmu(sw))
 		tb_sw_warn(sw, "failed to restore TMU configuration\n");
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 13/20] thunderbolt: Switch CL states from enum to a bitmask
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (11 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 12/20] thunderbolt: Move CLx enabling into tb_enable_clx() Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 14/20] thunderbolt: Check for first depth router in tb.c Mika Westerberg
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This is more natural and follows the hardware register layout better.
It also makes it easier to see which CL states we enable (even though
they should be enabled together). Rename 'clx_mask' to 'clx' everywhere
as this is now always a bitmask.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c    | 169 ++++++++++++++++++++---------------
 drivers/thunderbolt/switch.c |   7 +-
 drivers/thunderbolt/tb.c     |   2 +-
 drivers/thunderbolt/tb.h     |  54 ++++-------
 4 files changed, 113 insertions(+), 119 deletions(-)
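
As an illustration only (not part of the patch; the function name below
is made up for the example), the intended caller usage of the new
bitmask interface looks roughly like this: states are ORed together and
validate_mask() rejects masks that request a higher state without the
lower ones (e.g. TB_CL1 without TB_CL0S fails with -EINVAL):

static int example_clx_usage(struct tb_switch *sw, struct tb_port *port)
{
	int ret;

	/* CL0s and CL1 are requested together as a single mask */
	ret = tb_switch_clx_enable(sw, TB_CL0S | TB_CL1);
	if (ret)
		return ret;

	/* Returns true if any of the given CL states is enabled */
	if (tb_port_clx_is_enabled(port, TB_CL0S | TB_CL1 | TB_CL2))
		tb_port_dbg(port, "CL states active\n");

	return 0;
}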

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index d1a502525425..4601607f1901 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -45,7 +45,7 @@ static int tb_port_pm_secondary_disable(struct tb_port *port)
 }
 
 /* Called for USB4 or Titan Ridge routers only */
-static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
+static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx)
 {
 	u32 val, mask = 0;
 	bool ret;
@@ -65,11 +65,11 @@ static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
 		return false;
 	}
 
-	if (clx_mask & TB_CL1) {
-		/* CL0s and CL1 are enabled and supported together */
-		mask |= LANE_ADP_CS_0_CL0S_SUPPORT | LANE_ADP_CS_0_CL1_SUPPORT;
-	}
-	if (clx_mask & TB_CL2)
+	if (clx & TB_CL0S)
+		mask |= LANE_ADP_CS_0_CL0S_SUPPORT;
+	if (clx & TB_CL1)
+		mask |= LANE_ADP_CS_0_CL1_SUPPORT;
+	if (clx & TB_CL2)
 		mask |= LANE_ADP_CS_0_CL2_SUPPORT;
 
 	ret = tb_port_read(port, &val, TB_CFG_PORT,
@@ -80,16 +80,17 @@ static bool tb_port_clx_supported(struct tb_port *port, unsigned int clx_mask)
 	return !!(val & mask);
 }
 
-static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
+static int tb_port_clx_set(struct tb_port *port, unsigned int clx, bool enable)
 {
-	u32 phy, mask;
+	u32 phy, mask = 0;
 	int ret;
 
-	/* CL0s and CL1 are enabled and supported together */
-	if (clx == TB_CL1)
-		mask = LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
-	else
-		/* For now we support only CL0s and CL1. Not CL2 */
+	if (clx & TB_CL0S)
+		mask |= LANE_ADP_CS_1_CL0S_ENABLE;
+	if (clx & TB_CL1)
+		mask |= LANE_ADP_CS_1_CL1_ENABLE;
+
+	if (!mask)
 		return -EOPNOTSUPP;
 
 	ret = tb_port_read(port, &phy, TB_CFG_PORT,
@@ -106,12 +107,12 @@ static int tb_port_clx_set(struct tb_port *port, enum tb_clx clx, bool enable)
 			     port->cap_phy + LANE_ADP_CS_1, 1);
 }
 
-static int tb_port_clx_disable(struct tb_port *port, enum tb_clx clx)
+static int tb_port_clx_disable(struct tb_port *port, unsigned int clx)
 {
 	return tb_port_clx_set(port, clx, false);
 }
 
-static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
+static int tb_port_clx_enable(struct tb_port *port, unsigned int clx)
 {
 	return tb_port_clx_set(port, clx, true);
 }
@@ -119,21 +120,23 @@ static int tb_port_clx_enable(struct tb_port *port, enum tb_clx clx)
 /**
  * tb_port_clx_is_enabled() - Is given CL state enabled
  * @port: USB4 port to check
- * @clx_mask: Mask of CL states to check
+ * @clx: Mask of CL states to check
  *
  * Returns true if any of the given CL states is enabled for @port.
  */
-bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask)
+bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
 {
 	u32 val, mask = 0;
 	int ret;
 
-	if (!tb_port_clx_supported(port, clx_mask))
+	if (!tb_port_clx_supported(port, clx))
 		return false;
 
-	if (clx_mask & TB_CL1)
-		mask |= LANE_ADP_CS_1_CL0S_ENABLE | LANE_ADP_CS_1_CL1_ENABLE;
-	if (clx_mask & TB_CL2)
+	if (clx & TB_CL0S)
+		mask |= LANE_ADP_CS_1_CL0S_ENABLE;
+	if (clx & TB_CL1)
+		mask |= LANE_ADP_CS_1_CL1_ENABLE;
+	if (clx & TB_CL2)
 		mask |= LANE_ADP_CS_1_CL2_ENABLE;
 
 	ret = tb_port_read(port, &val, TB_CFG_PORT,
@@ -205,6 +208,50 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw)
 			   sw->cap_lp + offset, ARRAY_SIZE(val));
 }
 
+/**
+ * tb_switch_clx_is_supported() - Is CLx supported on this type of router
+ * @sw: The router to check CLx support for
+ */
+bool tb_switch_clx_is_supported(const struct tb_switch *sw)
+{
+	if (!clx_enabled)
+		return false;
+
+	if (sw->quirks & QUIRK_NO_CLX)
+		return false;
+
+	/*
+	 * CLx is not enabled and validated on Intel USB4 platforms
+	 * before Alder Lake.
+	 */
+	if (tb_switch_is_tiger_lake(sw))
+		return false;
+
+	return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
+}
+
+static bool validate_mask(unsigned int clx)
+{
+	/* Previous states need to be enabled */
+	if (clx & TB_CL2)
+		return (clx & (TB_CL0S | TB_CL1)) == (TB_CL0S | TB_CL1);
+	if (clx & TB_CL1)
+		return (clx & TB_CL0S) == TB_CL0S;
+	return true;
+}
+
+static const char *clx_name(unsigned int clx)
+{
+	if (clx & TB_CL2)
+		return "CL0s/CL1/CL2";
+	if (clx & TB_CL1)
+		return "CL0s/CL1";
+	if (clx & TB_CL0S)
+		return "CL0s";
+
+	return "unknown";
+}
+
 /**
  * tb_switch_clx_enable() - Enable CLx on upstream port of specified router
  * @sw: Router to enable CLx for
@@ -219,46 +266,32 @@ static int tb_switch_mask_clx_objections(struct tb_switch *sw)
  *
  * Return: Returns 0 on success or an error code on failure.
  */
-int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
+int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 {
-	struct tb_switch *root_sw = sw->tb->root_switch;
 	bool up_clx_support, down_clx_support;
+	struct tb_switch *parent_sw;
 	struct tb_port *up, *down;
 	int ret;
 
-	if (!clx_enabled)
-		return 0;
+	if (!validate_mask(clx))
+		return -EINVAL;
 
-	/*
-	 * CLx is not enabled and validated on Intel USB4 platforms before
-	 * Alder Lake.
-	 */
-	if (root_sw->generation < 4 || tb_switch_is_tiger_lake(root_sw))
+	parent_sw = tb_switch_parent(sw);
+	if (!parent_sw)
 		return 0;
 
-	switch (clx) {
-	case TB_CL1:
-		/* CL0s and CL1 are enabled and supported together */
-		break;
-
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	if (!tb_switch_clx_is_supported(sw))
-		return 0;
-
-	/*
-	 * Enable CLx for host router's downstream port as part of the
-	 * downstream router enabling procedure.
-	 */
-	if (!tb_route(sw))
+	if (!tb_switch_clx_is_supported(parent_sw) ||
+	    !tb_switch_clx_is_supported(sw))
 		return 0;
 
 	/* Enable CLx only for first hop router (depth = 1) */
 	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
+	/* CL2 is not yet supported */
+	if (clx & TB_CL2)
+		return -EOPNOTSUPP;
+
 	ret = tb_switch_pm_secondary_resolve(sw);
 	if (ret)
 		return ret;
@@ -269,9 +302,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
 	up_clx_support = tb_port_clx_supported(up, clx);
 	down_clx_support = tb_port_clx_supported(down, clx);
 
-	tb_port_dbg(up, "%s %ssupported\n", tb_switch_clx_name(clx),
+	tb_port_dbg(up, "%s %ssupported\n", clx_name(clx),
 		    up_clx_support ? "" : "not ");
-	tb_port_dbg(down, "%s %ssupported\n", tb_switch_clx_name(clx),
+	tb_port_dbg(down, "%s %ssupported\n", clx_name(clx),
 		    down_clx_support ? "" : "not ");
 
 	if (!up_clx_support || !down_clx_support)
@@ -294,52 +327,40 @@ int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx)
 		return ret;
 	}
 
-	sw->clx = clx;
+	sw->clx |= clx;
 
-	tb_port_dbg(up, "%s enabled\n", tb_switch_clx_name(clx));
+	tb_port_dbg(up, "%s enabled\n", clx_name(clx));
 	return 0;
 }
 
 /**
  * tb_switch_clx_disable() - Disable CLx on upstream port of specified router
  * @sw: Router to disable CLx for
- * @clx: The CLx state to disable
+ *
+ * Disables all CL states of the given router. Can be called on any
+ * router and if the states were not enabled already does nothing.
  *
  * Return: Returns 0 on success or an error code on failure.
  */
-int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx)
+int tb_switch_clx_disable(struct tb_switch *sw)
 {
+	unsigned int clx = sw->clx;
 	struct tb_port *up, *down;
 	int ret;
 
-	if (!clx_enabled)
-		return 0;
-
-	switch (clx) {
-	case TB_CL1:
-		/* CL0s and CL1 are enabled and supported together */
-		break;
-
-	default:
-		return -EOPNOTSUPP;
-	}
-
 	if (!tb_switch_clx_is_supported(sw))
 		return 0;
 
-	/*
-	 * Disable CLx for host router's downstream port as part of the
-	 * downstream router enabling procedure.
-	 */
-	if (!tb_route(sw))
-		return 0;
-
 	/* Disable CLx only for first hop router (depth = 1) */
 	if (tb_route(tb_switch_parent(sw)))
 		return 0;
 
+	if (!clx)
+		return 0;
+
 	up = tb_upstream_port(sw);
 	down = tb_switch_downstream_port(sw);
+
 	ret = tb_port_clx_disable(up, clx);
 	if (ret)
 		return ret;
@@ -348,8 +369,8 @@ int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx)
 	if (ret)
 		return ret;
 
-	sw->clx = TB_CLX_DISABLE;
+	sw->clx = 0;
 
-	tb_port_dbg(up, "%s disabled\n", tb_switch_clx_name(clx));
+	tb_port_dbg(up, "%s disabled\n", clx_name(clx));
 	return 0;
 }
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 984b5536e143..f33a09d92c9b 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -3111,13 +3111,8 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
 	/*
 	 * Actually only needed for Titan Ridge but for simplicity can be
 	 * done for USB4 device too as CLx is re-enabled at resume.
-	 * CL0s and CL1 are enabled and supported together.
 	 */
-	if (tb_switch_clx_is_enabled(sw, TB_CL1)) {
-		if (tb_switch_clx_disable(sw, TB_CL1))
-			tb_sw_warn(sw, "failed to disable %s on upstream port\n",
-				   tb_switch_clx_name(TB_CL1));
-	}
+	tb_switch_clx_disable(sw);
 
 	err = tb_plug_events_active(sw, false);
 	if (err)
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index e4f1233eb958..2d360508aeeb 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -248,7 +248,7 @@ static int tb_enable_clx(struct tb_switch *sw)
 	 * CL0s and CL1 are enabled and supported together.
 	 * Silently ignore CLx enabling in case CLx is not supported.
 	 */
-	ret = tb_switch_clx_enable(sw, TB_CL1);
+	ret = tb_switch_clx_enable(sw, TB_CL0S | TB_CL1);
 	return ret == -EOPNOTSUPP ? 0 : ret;
 }
 
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index d29bc7eab051..72e245639eb8 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -117,13 +117,6 @@ struct tb_switch_tmu {
 	enum tb_switch_tmu_rate rate_request;
 };
 
-enum tb_clx {
-	TB_CLX_DISABLE,
-	/* CL0s and CL1 are enabled and supported together */
-	TB_CL1 = BIT(0),
-	TB_CL2 = BIT(1),
-};
-
 /**
  * struct tb_switch - a thunderbolt switch
  * @dev: Device for the switch
@@ -174,7 +167,7 @@ enum tb_clx {
  * @min_dp_main_credits: Router preferred minimum number of buffers for DP MAIN
  * @max_pcie_credits: Router preferred number of buffers for PCIe
  * @max_dma_credits: Router preferred number of buffers for DMA/P2P
- * @clx: CLx state on the upstream link of the router
+ * @clx: CLx states on the upstream link of the router
  *
  * When the switch is being added or removed to the domain (other
  * switches) you need to have domain lock held.
@@ -225,7 +218,7 @@ struct tb_switch {
 	unsigned int min_dp_main_credits;
 	unsigned int max_pcie_credits;
 	unsigned int max_dma_credits;
-	enum tb_clx clx;
+	unsigned int clx;
 };
 
 /**
@@ -455,6 +448,11 @@ struct tb_path {
 #define TB_WAKE_ON_PCIE		BIT(4)
 #define TB_WAKE_ON_DP		BIT(5)
 
+/* CL states */
+#define TB_CL0S			BIT(0)
+#define TB_CL1			BIT(1)
+#define TB_CL2			BIT(2)
+
 /**
  * struct tb_cm_ops - Connection manager specific operations vector
  * @driver_ready: Called right after control channel is started. Used by
@@ -1002,46 +1000,26 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
 	       sw->tmu.unidirectional == sw->tmu.unidirectional_request;
 }
 
-bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx_mask);
-
-static inline const char *tb_switch_clx_name(enum tb_clx clx)
-{
-	switch (clx) {
-	/* CL0s and CL1 are enabled and supported together */
-	case TB_CL1:
-		return "CL0s/CL1";
-	default:
-		return "unknown";
-	}
-}
+bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx);
 
-int tb_switch_clx_enable(struct tb_switch *sw, enum tb_clx clx);
-int tb_switch_clx_disable(struct tb_switch *sw, enum tb_clx clx);
+bool tb_switch_clx_is_supported(const struct tb_switch *sw);
+int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx);
+int tb_switch_clx_disable(struct tb_switch *sw);
 
 /**
  * tb_switch_clx_is_enabled() - Checks if the CLx is enabled
  * @sw: Router to check for the CLx
- * @clx: The CLx state to check for
+ * @clx: The CLx states to check for
  *
  * Checks if the specified CLx is enabled on the router upstream link.
+ * Returns true if any of the given states is enabled.
+ *
  * Not applicable for a host router.
  */
 static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw,
-					    enum tb_clx clx)
+					    unsigned int clx)
 {
-	return sw->clx == clx;
-}
-
-/**
- * tb_switch_clx_is_supported() - Is CLx supported on this type of router
- * @sw: The router to check CLx support for
- */
-static inline bool tb_switch_clx_is_supported(const struct tb_switch *sw)
-{
-	if (sw->quirks & QUIRK_NO_CLX)
-		return false;
-
-	return tb_switch_is_usb4(sw) || tb_switch_is_titan_ridge(sw);
+	return sw->clx & clx;
 }
 
 int tb_switch_pcie_l1_enable(struct tb_switch *sw);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 14/20] thunderbolt: Check for first depth router in tb.c
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (12 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 13/20] thunderbolt: Switch CL states from enum to a bitmask Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 15/20] thunderbolt: Do not call CLx functions from TMU code Mika Westerberg
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Currently tb_switch_clx_enable() enables CL states only for the first
depth router. This is something we may want to change in the future,
and in addition it is not visible from the calling path at all. For
this reason do the check in tb.c so it is immediately visible that we
only do this for the first depth router. Fix the kernel-docs
accordingly.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c | 22 ++++++----------------
 drivers/thunderbolt/tb.c  | 10 ++++++++++
 2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index 4601607f1901..b8cfbd643311 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -257,14 +257,12 @@ static const char *clx_name(unsigned int clx)
  * @sw: Router to enable CLx for
  * @clx: The CLx state to enable
  *
- * Enable CLx state only for first hop router. That is the most common
- * use-case, that is intended for better thermal management, and so helps
- * to improve performance. CLx is enabled only if both sides of the link
- * support CLx, and if both sides of the link are not configured as two
- * single lane links and only if the link is not inter-domain link. The
- * complete set of conditions is described in CM Guide 1.0 section 8.1.
+ * CLx is enabled only if both sides of the link support CLx, and if both sides
+ * of the link are not configured as two single lane links and only if the link
+ * is not inter-domain link. The complete set of conditions is described in CM
+ * Guide 1.0 section 8.1.
  *
- * Return: Returns 0 on success or an error code on failure.
+ * Returns %0 on success or an error code on failure.
  */
 int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 {
@@ -284,10 +282,6 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 	    !tb_switch_clx_is_supported(sw))
 		return 0;
 
-	/* Enable CLx only for first hop router (depth = 1) */
-	if (tb_route(tb_switch_parent(sw)))
-		return 0;
-
 	/* CL2 is not yet supported */
 	if (clx & TB_CL2)
 		return -EOPNOTSUPP;
@@ -340,7 +334,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
  * Disables all CL states of the given router. Can be called on any
  * router and if the states were not enabled already does nothing.
  *
- * Return: Returns 0 on success or an error code on failure.
+ * Returns %0 on success or an error code on failure.
  */
 int tb_switch_clx_disable(struct tb_switch *sw)
 {
@@ -351,10 +345,6 @@ int tb_switch_clx_disable(struct tb_switch *sw)
 	if (!tb_switch_clx_is_supported(sw))
 		return 0;
 
-	/* Disable CLx only for first hop router (depth = 1) */
-	if (tb_route(tb_switch_parent(sw)))
-		return 0;
-
 	if (!clx)
 		return 0;
 
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 2d360508aeeb..1d056ff6d77f 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -244,6 +244,16 @@ static int tb_enable_clx(struct tb_switch *sw)
 {
 	int ret;
 
+	/*
+	 * Currently only enable CLx for the first link. This is enough
+	 * to allow the CPU to save energy at least on Intel hardware
+	 * and makes it slightly simpler to implement. We may change
+	 * this in the future to cover the whole topology if it turns
+	 * out to be beneficial.
+	 */
+	if (sw->config.depth != 1)
+		return 0;
+
 	/*
 	 * CL0s and CL1 are enabled and supported together.
 	 * Silently ignore CLx enabling in case CLx is not supported.
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 15/20] thunderbolt: Do not call CLx functions from TMU code
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (13 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 14/20] thunderbolt: Check for first depth router in tb.c Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 16/20] thunderbolt: Prefix TMU post time log message with "TMU: " Mika Westerberg
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

There is really no need to call any of the CLx functions in the TMU code
so remove all these checks. This makes the TMU enable/disable flows
easier to follow as well.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tmu.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 6988704c845c..7d06bacf24ff 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -383,14 +383,6 @@ int tb_switch_tmu_post_time(struct tb_switch *sw)
  */
 int tb_switch_tmu_disable(struct tb_switch *sw)
 {
-	/*
-	 * No need to disable TMU on devices that don't support CLx since
-	 * on these devices e.g. Alpine Ridge and earlier, the TMU mode
-	 * HiFi bi-directional is enabled by default and we don't change it.
-	 */
-	if (!tb_switch_clx_is_supported(sw))
-		return 0;
-
 	/* Already disabled? */
 	if (sw->tmu.rate == TB_SWITCH_TMU_RATE_OFF)
 		return 0;
@@ -648,25 +640,10 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
 	bool unidirectional = sw->tmu.unidirectional_request;
 	int ret;
 
-	/*
-	 * No need to enable TMU on devices that don't support CLx since on
-	 * these devices e.g. Alpine Ridge and earlier, the TMU mode HiFi
-	 * bi-directional is enabled by default.
-	 */
-	if (!tb_switch_clx_is_supported(sw))
-		return 0;
-
 	if (tb_switch_tmu_is_enabled(sw))
 		return 0;
 
 	if (tb_switch_is_titan_ridge(sw) && unidirectional) {
-		/*
-		 * Titan Ridge supports CL0s and CL1 only. CL0s and CL1 are
-		 * enabled and supported together.
-		 */
-		if (!tb_switch_clx_is_enabled(sw, TB_CL1))
-			return -EOPNOTSUPP;
-
 		ret = tb_switch_tmu_disable_objections(sw);
 		if (ret)
 			return ret;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 16/20] thunderbolt: Prefix TMU post time log message with "TMU: "
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (14 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 15/20] thunderbolt: Do not call CLx functions from TMU code Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 17/20] thunderbolt: Prefix CL state related log messages with "CLx: " Mika Westerberg
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Following what we do with other messages in this file. No functional
changes.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/tmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/thunderbolt/tmu.c b/drivers/thunderbolt/tmu.c
index 7d06bacf24ff..c926fb71c43d 100644
--- a/drivers/thunderbolt/tmu.c
+++ b/drivers/thunderbolt/tmu.c
@@ -308,7 +308,7 @@ int tb_switch_tmu_post_time(struct tb_switch *sw)
 		return ret;
 
 	for (i = 0; i < ARRAY_SIZE(gm_local_time); i++)
-		tb_sw_dbg(root_switch, "local_time[%d]=0x%08x\n", i,
+		tb_sw_dbg(root_switch, "TMU: local_time[%d]=0x%08x\n", i,
 			  gm_local_time[i]);
 
 	/* Convert to nanoseconds (drop fractional part) */
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 17/20] thunderbolt: Prefix CL state related log messages with "CLx: "
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (15 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 16/20] thunderbolt: Prefix TMU post time log message with "TMU: " Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 18/20] thunderbolt: Initialize CL states from the hardware Mika Westerberg
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This makes it easier to spot from the logs and follows what we do with
the TMU code already. We also log enabling/disabling of CL states using
tb_sw_dbg() instead of tb_port_dbg().

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index b8cfbd643311..5e745386c413 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -296,9 +296,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 	up_clx_support = tb_port_clx_supported(up, clx);
 	down_clx_support = tb_port_clx_supported(down, clx);
 
-	tb_port_dbg(up, "%s %ssupported\n", clx_name(clx),
+	tb_port_dbg(up, "CLx: %s %ssupported\n", clx_name(clx),
 		    up_clx_support ? "" : "not ");
-	tb_port_dbg(down, "%s %ssupported\n", clx_name(clx),
+	tb_port_dbg(down, "CLx: %s %ssupported\n", clx_name(clx),
 		    down_clx_support ? "" : "not ");
 
 	if (!up_clx_support || !down_clx_support)
@@ -323,7 +323,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 
 	sw->clx |= clx;
 
-	tb_port_dbg(up, "%s enabled\n", clx_name(clx));
+	tb_sw_dbg(sw, "CLx: %s enabled\n", clx_name(clx));
 	return 0;
 }
 
@@ -361,6 +361,6 @@ int tb_switch_clx_disable(struct tb_switch *sw)
 
 	sw->clx = 0;
 
-	tb_port_dbg(up, "%s disabled\n", clx_name(clx));
+	tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx));
 	return 0;
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 18/20] thunderbolt: Initialize CL states from the hardware
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (16 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 17/20] thunderbolt: Prefix CL state related log messages with "CLx: " Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 19/20] thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled Mika Westerberg
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

In case the boot firmware enabled any of the CL states, read the
currently configured states from the hardware and update the router
structure accordingly.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c    | 100 +++++++++++++++++++++++++----------
 drivers/thunderbolt/switch.c |   4 ++
 drivers/thunderbolt/tb.h     |   1 +
 3 files changed, 78 insertions(+), 27 deletions(-)

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index 5e745386c413..960409df4405 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -15,6 +15,21 @@ static bool clx_enabled = true;
 module_param_named(clx, clx_enabled, bool, 0444);
 MODULE_PARM_DESC(clx, "allow low power states on the high-speed lanes (default: true)");
 
+static const char *clx_name(unsigned int clx)
+{
+	if (!clx)
+		return "disabled";
+
+	if (clx & TB_CL2)
+		return "CL0s/CL1/CL2";
+	if (clx & TB_CL1)
+		return "CL0s/CL1";
+	if (clx & TB_CL0S)
+		return "CL0s";
+
+	return "unknown";
+}
+
 static int tb_port_pm_secondary_set(struct tb_port *port, bool secondary)
 {
 	u32 phy;
@@ -117,6 +132,29 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx)
 	return tb_port_clx_set(port, clx, true);
 }
 
+static int tb_port_clx(struct tb_port *port)
+{
+	u32 val;
+	int ret;
+
+	if (!tb_port_clx_supported(port, TB_CL0S | TB_CL1 | TB_CL2))
+		return 0;
+
+	ret = tb_port_read(port, &val, TB_CFG_PORT,
+			   port->cap_phy + LANE_ADP_CS_1, 1);
+	if (ret)
+		return ret;
+
+	if (val & LANE_ADP_CS_1_CL0S_ENABLE)
+		ret |= TB_CL0S;
+	if (val & LANE_ADP_CS_1_CL1_ENABLE)
+		ret |= TB_CL1;
+	if (val & LANE_ADP_CS_1_CL2_ENABLE)
+		ret |= TB_CL2;
+
+	return ret;
+}
+
 /**
  * tb_port_clx_is_enabled() - Is given CL state enabled
  * @port: USB4 port to check
@@ -126,25 +164,45 @@ static int tb_port_clx_enable(struct tb_port *port, unsigned int clx)
  */
 bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
 {
-	u32 val, mask = 0;
-	int ret;
+	return !!(tb_port_clx(port) & clx);
+}
 
-	if (!tb_port_clx_supported(port, clx))
-		return false;
+/**
+ * tb_switch_clx_init() - Initialize router CL states
+ * @sw: Router
+ *
+ * Can be called for any router. Initializes the current CL state by
+ * reading it from the hardware.
+ *
+ * Returns %0 in case of success and negative errno in case of failure.
+ */
+int tb_switch_clx_init(struct tb_switch *sw)
+{
+	struct tb_port *up, *down;
+	unsigned int clx, tmp;
 
-	if (clx & TB_CL0S)
-		mask |= LANE_ADP_CS_1_CL0S_ENABLE;
-	if (clx & TB_CL1)
-		mask |= LANE_ADP_CS_1_CL1_ENABLE;
-	if (clx & TB_CL2)
-		mask |= LANE_ADP_CS_1_CL2_ENABLE;
+	if (tb_switch_is_icm(sw))
+		return 0;
 
-	ret = tb_port_read(port, &val, TB_CFG_PORT,
-			   port->cap_phy + LANE_ADP_CS_1, 1);
-	if (ret)
-		return false;
+	if (!tb_route(sw))
+		return 0;
 
-	return !!(val & mask);
+	if (!tb_switch_clx_is_supported(sw))
+		return 0;
+
+	up = tb_upstream_port(sw);
+	down = tb_switch_downstream_port(sw);
+
+	clx = tb_port_clx(up);
+	tmp = tb_port_clx(down);
+	if (clx != tmp)
+		tb_sw_warn(sw, "CLx: inconsistent configuration %#x != %#x\n",
+			   clx, tmp);
+
+	tb_sw_dbg(sw, "CLx: current mode: %s\n", clx_name(clx));
+
+	sw->clx = clx;
+	return 0;
 }
 
 static int tb_switch_pm_secondary_resolve(struct tb_switch *sw)
@@ -240,18 +298,6 @@ static bool validate_mask(unsigned int clx)
 	return true;
 }
 
-static const char *clx_name(unsigned int clx)
-{
-	if (clx & TB_CL2)
-		return "CL0s/CL1/CL2";
-	if (clx & TB_CL1)
-		return "CL0s/CL1";
-	if (clx & TB_CL0S)
-		return "CL0s";
-
-	return "unknown";
-}
-
 /**
  * tb_switch_clx_enable() - Enable CLx on upstream port of specified router
  * @sw: Router to enable CLx for
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index f33a09d92c9b..0c11caec7e8e 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -2859,6 +2859,10 @@ int tb_switch_add(struct tb_switch *sw)
 		if (ret)
 			return ret;
 
+		ret = tb_switch_clx_init(sw);
+		if (ret)
+			return ret;
+
 		ret = tb_switch_tmu_init(sw);
 		if (ret)
 			return ret;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 72e245639eb8..58df106aaa5e 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -1002,6 +1002,7 @@ static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
 
 bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx);
 
+int tb_switch_clx_init(struct tb_switch *sw);
 bool tb_switch_clx_is_supported(const struct tb_switch *sw);
 int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx);
 int tb_switch_clx_disable(struct tb_switch *sw);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 19/20] thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (17 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 18/20] thunderbolt: Initialize CL states from the hardware Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-05-29 10:04 ` [PATCH 20/20] thunderbolt: Disable CL states when a DMA tunnel is established Mika Westerberg
  2023-06-09  9:09 ` [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

This allows us to disable all CL states temporarily when running lane
margining and then restore the previously enabled states afterwards.
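
A minimal sketch of the save/restore pattern this makes possible (the
real user is margining_run_write() in the diff below; the function
name and the placeholder operation are made up for the example):

	static int example_run_without_clx(struct tb_switch *sw)
	{
		int ret, clx;

		/* Returns the CL states that were enabled, or negative errno */
		clx = tb_switch_clx_disable(sw);
		if (clx < 0)
			return clx;

		ret = 0; /* ... run the CLx sensitive operation here ... */

		/* Best effort: restore whatever was enabled before */
		tb_switch_clx_enable(sw, clx);
		return ret;
	}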

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c     |  8 ++++++--
 drivers/thunderbolt/debugfs.c | 35 ++++++++++++++++++++++++-----------
 2 files changed, 30 insertions(+), 13 deletions(-)

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index 960409df4405..4f0cfbb24dd9 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -317,6 +317,9 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 	struct tb_port *up, *down;
 	int ret;
 
+	if (!clx)
+		return 0;
+
 	if (!validate_mask(clx))
 		return -EINVAL;
 
@@ -380,7 +383,8 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
  * Disables all CL states of the given router. Can be called on any
  * router and if the states were not enabled already does nothing.
  *
- * Returns %0 on success or an error code on failure.
+ * Returns the CL states that were disabled or negative errno in case of
+ * failure.
  */
 int tb_switch_clx_disable(struct tb_switch *sw)
 {
@@ -408,5 +412,5 @@ int tb_switch_clx_disable(struct tb_switch *sw)
 	sw->clx = 0;
 
 	tb_sw_dbg(sw, "CLx: %s disabled\n", clx_name(clx));
-	return 0;
+	return clx;
 }
diff --git a/drivers/thunderbolt/debugfs.c b/drivers/thunderbolt/debugfs.c
index e376ad25bf60..40b59e662ee3 100644
--- a/drivers/thunderbolt/debugfs.c
+++ b/drivers/thunderbolt/debugfs.c
@@ -553,8 +553,9 @@ static int margining_run_write(void *data, u64 val)
 	struct usb4_port *usb4 = port->usb4;
 	struct tb_switch *sw = port->sw;
 	struct tb_margining *margining;
+	struct tb_switch *down_sw;
 	struct tb *tb = sw->tb;
-	int ret;
+	int ret, clx;
 
 	if (val != 1)
 		return -EINVAL;
@@ -566,15 +567,24 @@ static int margining_run_write(void *data, u64 val)
 		goto out_rpm_put;
 	}
 
-	/*
-	 * CL states may interfere with lane margining so inform the user know
-	 * and bail out.
-	 */
-	if (tb_port_clx_is_enabled(port, TB_CL1 | TB_CL2)) {
-		tb_port_warn(port,
-			     "CL states are enabled, Disable them with clx=0 and re-connect\n");
-		ret = -EINVAL;
-		goto out_unlock;
+	if (tb_is_upstream_port(port))
+		down_sw = sw;
+	else if (port->remote)
+		down_sw = port->remote->sw;
+	else
+		down_sw = NULL;
+
+	if (down_sw) {
+		/*
+		 * CL states may interfere with lane margining so
+		 * disable them temporarily now.
+		 */
+		ret = tb_switch_clx_disable(down_sw);
+		if (ret < 0) {
+			tb_sw_warn(down_sw, "failed to disable CL states\n");
+			goto out_unlock;
+		}
+		clx = ret;
 	}
 
 	margining = usb4->margining;
@@ -586,7 +596,7 @@ static int margining_run_write(void *data, u64 val)
 					  margining->right_high,
 					  USB4_MARGIN_SW_COUNTER_CLEAR);
 		if (ret)
-			goto out_unlock;
+			goto out_clx;
 
 		ret = usb4_port_sw_margin_errors(port, &margining->results[0]);
 	} else {
@@ -600,6 +610,9 @@ static int margining_run_write(void *data, u64 val)
 					  margining->right_high, margining->results);
 	}
 
+out_clx:
+	if (down_sw)
+		tb_switch_clx_enable(down_sw, clx);
 out_unlock:
 	mutex_unlock(&tb->lock);
 out_rpm_put:
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH 20/20] thunderbolt: Disable CL states when a DMA tunnel is established
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (18 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 19/20] thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled Mika Westerberg
@ 2023-05-29 10:04 ` Mika Westerberg
  2023-06-09  9:09 ` [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-05-29 10:04 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine, Mika Westerberg

Tunnels between hosts should not have CL states enabled because
otherwise the link might enter a low power state without the other end
noticing, which causes packets to be lost. For this reason disable all
CL states when the first DMA tunnel is created. Once the last DMA
tunnel is torn down we try to re-enable them.
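
Conceptually (a simplified sketch of the tb.c change below, with a
made-up helper name), re-enabling stays skipped as long as any DMA
tunnel still runs through the first depth router's upstream port:

	static bool example_dma_tunnel_active(struct tb_cm *tcm,
					      struct tb_switch *sw)
	{
		const struct tb_tunnel *tunnel;

		list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
			if (tb_tunnel_is_dma(tunnel) &&
			    tb_tunnel_port_on_path(tunnel, tb_upstream_port(sw)))
				return true;
		}
		return false;
	}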

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
---
 drivers/thunderbolt/clx.c |  2 +-
 drivers/thunderbolt/tb.c  | 62 +++++++++++++++++++++++++++++++++++----
 2 files changed, 58 insertions(+), 6 deletions(-)

diff --git a/drivers/thunderbolt/clx.c b/drivers/thunderbolt/clx.c
index 4f0cfbb24dd9..604cceb23659 100644
--- a/drivers/thunderbolt/clx.c
+++ b/drivers/thunderbolt/clx.c
@@ -317,7 +317,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
 	struct tb_port *up, *down;
 	int ret;
 
-	if (!clx)
+	if (!clx || sw->clx == clx)
 		return 0;
 
 	if (!validate_mask(clx))
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 1d056ff6d77f..aa6e11589c28 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -240,8 +240,11 @@ static void tb_discover_dp_resources(struct tb *tb)
 	}
 }
 
+/* Enables CL states up to host router */
 static int tb_enable_clx(struct tb_switch *sw)
 {
+	struct tb_cm *tcm = tb_priv(sw->tb);
+	const struct tb_tunnel *tunnel;
 	int ret;
 
 	/*
@@ -251,9 +254,26 @@ static int tb_enable_clx(struct tb_switch *sw)
 	 * this in the future to cover the whole topology if it turns
 	 * out to be beneficial.
 	 */
+	while (sw && sw->config.depth > 1)
+		sw = tb_switch_parent(sw);
+
+	if (!sw)
+		return 0;
+
 	if (sw->config.depth != 1)
 		return 0;
 
+	/*
+	 * If we are re-enabling then check if there is an active DMA
+	 * tunnel and in that case bail out.
+	 */
+	list_for_each_entry(tunnel, &tcm->tunnel_list, list) {
+		if (tb_tunnel_is_dma(tunnel)) {
+			if (tb_tunnel_port_on_path(tunnel, tb_upstream_port(sw)))
+				return 0;
+		}
+	}
+
 	/*
 	 * CL0s and CL1 are enabled and supported together.
 	 * Silently ignore CLx enabling in case CLx is not supported.
@@ -262,6 +282,16 @@ static int tb_enable_clx(struct tb_switch *sw)
 	return ret == -EOPNOTSUPP ? 0 : ret;
 }
 
+/* Disables CL states up to the host router */
+static void tb_disable_clx(struct tb_switch *sw)
+{
+	do {
+		if (tb_switch_clx_disable(sw) < 0)
+			tb_sw_warn(sw, "failed to disable CL states\n");
+		sw = tb_switch_parent(sw);
+	} while (sw);
+}
+
 static int tb_increase_switch_tmu_accuracy(struct device *dev, void *data)
 {
 	struct tb_switch *sw;
@@ -1470,30 +1500,45 @@ static int tb_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
 	struct tb_port *nhi_port, *dst_port;
 	struct tb_tunnel *tunnel;
 	struct tb_switch *sw;
+	int ret;
 
 	sw = tb_to_switch(xd->dev.parent);
 	dst_port = tb_port_at(xd->route, sw);
 	nhi_port = tb_switch_find_port(tb->root_switch, TB_TYPE_NHI);
 
 	mutex_lock(&tb->lock);
+
+	/*
+	 * When tunneling DMA paths the link should not enter CL states
+	 * so disable them now.
+	 */
+	tb_disable_clx(sw);
+
 	tunnel = tb_tunnel_alloc_dma(tb, nhi_port, dst_port, transmit_path,
 				     transmit_ring, receive_path, receive_ring);
 	if (!tunnel) {
-		mutex_unlock(&tb->lock);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto err_clx;
 	}
 
 	if (tb_tunnel_activate(tunnel)) {
 		tb_port_info(nhi_port,
 			     "DMA tunnel activation failed, aborting\n");
-		tb_tunnel_free(tunnel);
-		mutex_unlock(&tb->lock);
-		return -EIO;
+		ret = -EIO;
+		goto err_free;
 	}
 
 	list_add_tail(&tunnel->list, &tcm->tunnel_list);
 	mutex_unlock(&tb->lock);
 	return 0;
+
+err_free:
+	tb_tunnel_free(tunnel);
+err_clx:
+	tb_enable_clx(sw);
+	mutex_unlock(&tb->lock);
+
+	return ret;
 }
 
 static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
@@ -1519,6 +1564,13 @@ static void __tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
 					receive_path, receive_ring))
 			tb_deactivate_and_free_tunnel(tunnel);
 	}
+
+	/*
+	 * Try to re-enable CL states now, it is OK if this fails
+	 * because we may still have another DMA tunnel active through
+	 * the same host router USB4 downstream port.
+	 */
+	tb_enable_clx(sw);
 }
 
 static int tb_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH 00/20] thunderbolt: Rework TMU and CLx support
  2023-05-29 10:04 [PATCH 00/20] thunderbolt: Rework TMU and CLx support Mika Westerberg
                   ` (19 preceding siblings ...)
  2023-05-29 10:04 ` [PATCH 20/20] thunderbolt: Disable CL states when a DMA tunnel is established Mika Westerberg
@ 2023-06-09  9:09 ` Mika Westerberg
  20 siblings, 0 replies; 22+ messages in thread
From: Mika Westerberg @ 2023-06-09  9:09 UTC (permalink / raw)
  To: linux-usb
  Cc: Yehezkel Bernat, Michael Jamet, Lukas Wunner, Andreas Noever,
	Gil Fine

On Mon, May 29, 2023 at 01:04:05PM +0300, Mika Westerberg wrote:
> Hi all,
> 
> This series reworks the TMU and CLx support code to match better what we
> do elsewhere in the driver and prepares for USB4v2 adaptive TMU support
> that we are going to add in the subsequent series (I'm sending that out
> later this week). I've split this part as separate from USB4v2 support
> hoping that it makes reviewing them easier.
> 
> Gil Fine (1):
>   thunderbolt: Introduce tb_switch_downstream_port()
> 
> Mika Westerberg (19):
>   thunderbolt: Introduce tb_xdomain_downstream_port()
>   thunderbolt: Fix a couple of style issues in TMU code
>   thunderbolt: Drop useless 'unidirectional' parameter from tb_switch_tmu_is_enabled()
>   thunderbolt: Rework Titan Ridge TMU objection disable function
>   thunderbolt: Get rid of tb_switch_enable_tmu_1st_child()
>   thunderbolt: Move TMU configuration to tb_enable_tmu()
>   thunderbolt: Move tb_enable_tmu() close to other TMU functions
>   thunderbolt: Check valid TMU configuration in tb_switch_tmu_configure()
>   thunderbolt: Move CLx support functions into clx.c
>   thunderbolt: Get rid of __tb_switch_[en|dis]able_clx()
>   thunderbolt: Move CLx enabling into tb_enable_clx()
>   thunderbolt: Switch CL states from enum to a bitmask
>   thunderbolt: Check for first depth router in tb.c
>   thunderbolt: Do not call CLx functions from TMU code
>   thunderbolt: Prefix TMU post time log message with "TMU: "
>   thunderbolt: Prefix CL state related log messages with "CLx: "
>   thunderbolt: Initialize CL states from the hardware
>   thunderbolt: Make tb_switch_clx_disable() return CL states that were enabled
>   thunderbolt: Disable CL states when a DMA tunnel is established

All applied to thunderbolt.git/next.

^ permalink raw reply	[flat|nested] 22+ messages in thread

