Linux USB
* [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts
@ 2026-04-28 18:49 Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Konrad Dybcio @ 2026-04-28 18:49 UTC (permalink / raw)
  To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
  Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
	Konrad Dybcio

Currently, the NHI driver (and other parts of the TBT framework) makes
multiple assumptions about the host router being a PCIe device. This
series decouples them by moving the 'struct pci_dev' out of
the NHI code and introducing NHI-on-PCIe-specific abstractions where
necessary (with no functional change).

The intended usage of the new nhi_probe_common() is pretty similar to
other bus frameworks (I2C, SPI, USB..), i.e.:

static int foo_bar_probe() {
        // get SoC-specific resources (clks, regulators..)

        // power things on

        // set some implementation-specific registers

        // register NHI and all the sub-devices
        ret = nhi_probe(&my_usb4->nhi);
        ...

        // cleanup boilerplate
}

Instead of the previously-suggested aux/fauxbus, the NHI device remains
the same 'struct device' as the PCIe/platform/[...] device that provides
it. This is in line with some other buses and makes things easier
from the PM perspective.

Tested on:
* Qualcomm X1E80100 CRD (OOT driver)
 * USB4 (Qualcomm controller)
 * Connected to a TBT3 ASUS ProArt 27 monitor
 * Parade PS8830 on-board retimer

Domain 0 Route 0: 0000:0000
Domain 0 Route 2: 0031:9000 ASUS-Display PA27AC

* Intel Coffee Lake NUC (NUC8i3BEK)
 * TBT3 (Alpine Ridge 2C 2016 controller)
 * Connected to a Dell TB16 dock (TBT active cable)
 * S3 + S2idle sleep

Domain 0 Route 0: 8086:6357 Intel Corporation NUC8BEB
Domain 0 Route 1: 00d4:b051 Dell Dell Thunderbolt Cable
Domain 0 Route 301: 00d4:b054 Dell Dell Thunderbolt Dock

* AMD Ryzen 7 PRO 7840U-based Lenovo ThinkPad T14s Gen 4
 * USB4 ("Pink Sardine" controller)
 * Connected to a Lenovo ThinkPad Thunderbolt 3 Dock
 * Parade PS8830 on-board retimer
 * Only S2idle is present on this platform

Domain 0 Route 0: 0000:0000
Domain 1 Route 0: 0000:0000
Domain 1 Route 2: 0108:1630 Lenovo ThinkPad Thunderbolt 3 Dock

Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
Changes in v2:
- Make 'struct tb_nhi_pci' private, strip it of the 'struct pci_dev'
  field since it can be accessed via to_pci_dev(tb_nhi_pci->nhi.dev)
- Thin out patch 1, move some of its prior contents to patch 2
- Rename nhi_pci.[ch] to pci.[ch]
- Rename nhi_probe_common() to nhi_probe()
- Squash a number of bugs discovered at runtime on x86
- Add a patch making nhi->ops mandatory so boilerplate checks can be dropped
- Reword the error messages introduced in the last patch
- Drop RFC/RFT tags
- Link to v1: https://lore.kernel.org/r/20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com

---
Konrad Dybcio (4):
      thunderbolt: Move pci_device out of tb_nhi
      thunderbolt: Separate out common NHI bits
      thunderbolt: Require nhi->ops be valid
      thunderbolt: Add some more descriptive probe error messages

 drivers/thunderbolt/Makefile    |   2 +-
 drivers/thunderbolt/acpi.c      |  14 +-
 drivers/thunderbolt/ctl.c       |  14 +-
 drivers/thunderbolt/domain.c    |   2 +-
 drivers/thunderbolt/eeprom.c    |   2 +-
 drivers/thunderbolt/icm.c       |  24 +-
 drivers/thunderbolt/nhi.c       | 513 +++++++---------------------------------
 drivers/thunderbolt/nhi.h       |  32 +++
 drivers/thunderbolt/nhi_ops.c   |  35 ++-
 drivers/thunderbolt/pci.c       | 507 +++++++++++++++++++++++++++++++++++++++
 drivers/thunderbolt/pci.h       |  19 ++
 drivers/thunderbolt/switch.c    |  41 +---
 drivers/thunderbolt/tb.c        |  76 +-----
 drivers/thunderbolt/tb.h        |  10 +-
 drivers/thunderbolt/usb4_port.c |   2 +-
 include/linux/thunderbolt.h     |   5 +-
 16 files changed, 724 insertions(+), 574 deletions(-)
---
base-commit: 936c21068d7ade00325e40d82bfd2f3f29d9f659
change-id: 20260309-topic-usb4_nonpcie_prepwork-86881f769b8f

Best regards,
-- 
Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>



* [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi
  2026-04-28 18:49 [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
@ 2026-04-28 18:49 ` Konrad Dybcio
  2026-05-04  6:40   ` Mika Westerberg
  2026-04-28 18:49 ` [PATCH v2 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Konrad Dybcio @ 2026-04-28 18:49 UTC (permalink / raw)
  To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
  Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
	Konrad Dybcio

From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>

Not all USB4/TB implementations are based on a PCIe-attached
controller. In order to make way for these, start by moving the
pci_dev reference out of the main tb_nhi structure.

Encapsulate the existing struct in a new tb_nhi_pci, which will also
house all properties that relate to the parent bus. Similarly, any
other type of controller will be expected to contain tb_nhi as a
member.

Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
 drivers/thunderbolt/acpi.c      |  14 +--
 drivers/thunderbolt/ctl.c       |  14 +--
 drivers/thunderbolt/domain.c    |   2 +-
 drivers/thunderbolt/eeprom.c    |   2 +-
 drivers/thunderbolt/icm.c       |  24 ++---
 drivers/thunderbolt/nhi.c       | 212 ++++++++++++++++++++++++++++------------
 drivers/thunderbolt/nhi.h       |   1 +
 drivers/thunderbolt/nhi_ops.c   |  26 ++---
 drivers/thunderbolt/switch.c    |   6 +-
 drivers/thunderbolt/tb.c        |  69 -------------
 drivers/thunderbolt/tb.h        |  10 +-
 drivers/thunderbolt/usb4_port.c |   2 +-
 include/linux/thunderbolt.h     |   5 +-
 13 files changed, 209 insertions(+), 178 deletions(-)

diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
index 45d1415871b4..53546bc477a5 100644
--- a/drivers/thunderbolt/acpi.c
+++ b/drivers/thunderbolt/acpi.c
@@ -28,7 +28,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
 		return AE_OK;
 
 	/* It needs to reference this NHI */
-	if (dev_fwnode(&nhi->pdev->dev) != fwnode)
+	if (dev_fwnode(nhi->dev) != fwnode)
 		goto out_put;
 
 	/*
@@ -57,16 +57,16 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
 		 */
 		pm_runtime_get_sync(&pdev->dev);
 
-		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+		link = device_link_add(&pdev->dev, nhi->dev,
 				       DL_FLAG_AUTOREMOVE_SUPPLIER |
 				       DL_FLAG_RPM_ACTIVE |
 				       DL_FLAG_PM_RUNTIME);
 		if (link) {
-			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+			dev_dbg(nhi->dev, "created link from %s\n",
 				dev_name(&pdev->dev));
 			*(bool *)ret = true;
 		} else {
-			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+			dev_warn(nhi->dev, "device link creation from %s failed\n",
 				 dev_name(&pdev->dev));
 		}
 
@@ -93,7 +93,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
 	acpi_status status;
 	bool ret = false;
 
-	if (!has_acpi_companion(&nhi->pdev->dev))
+	if (!has_acpi_companion(nhi->dev))
 		return false;
 
 	/*
@@ -103,7 +103,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
 	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 32,
 				     tb_acpi_add_link, NULL, nhi, (void **)&ret);
 	if (ACPI_FAILURE(status)) {
-		dev_warn(&nhi->pdev->dev, "failed to enumerate tunneled ports\n");
+		dev_warn(nhi->dev, "failed to enumerate tunneled ports\n");
 		return false;
 	}
 
@@ -305,7 +305,7 @@ static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
 		struct tb_nhi *nhi = sw->tb->nhi;
 		struct acpi_device *parent_adev;
 
-		parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
+		parent_adev = ACPI_COMPANION(nhi->dev);
 		if (parent_adev)
 			adev = acpi_find_child_device(parent_adev, 0, false);
 	}
diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index b2fd60fc7bcc..a22c8c2a301d 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -56,22 +56,22 @@ struct tb_ctl {
 
 
 #define tb_ctl_WARN(ctl, format, arg...) \
-	dev_WARN(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_WARN((ctl)->nhi->dev, format, ## arg)
 
 #define tb_ctl_err(ctl, format, arg...) \
-	dev_err(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_err((ctl)->nhi->dev, format, ## arg)
 
 #define tb_ctl_warn(ctl, format, arg...) \
-	dev_warn(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_warn((ctl)->nhi->dev, format, ## arg)
 
 #define tb_ctl_info(ctl, format, arg...) \
-	dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_info((ctl)->nhi->dev, format, ## arg)
 
 #define tb_ctl_dbg(ctl, format, arg...) \
-	dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_dbg((ctl)->nhi->dev, format, ## arg)
 
 #define tb_ctl_dbg_once(ctl, format, arg...) \
-	dev_dbg_once(&(ctl)->nhi->pdev->dev, format, ## arg)
+	dev_dbg_once((ctl)->nhi->dev, format, ## arg)
 
 static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue);
 /* Serializes access to request kref_get/put */
@@ -666,7 +666,7 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int index, int timeout_msec,
 
 	mutex_init(&ctl->request_queue_lock);
 	INIT_LIST_HEAD(&ctl->request_queue);
-	ctl->frame_pool = dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev,
+	ctl->frame_pool = dma_pool_create("thunderbolt_ctl", nhi->dev,
 					 TB_FRAME_SIZE, 4, 0);
 	if (!ctl->frame_pool)
 		goto err;
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index 317780b99992..8e332a9ad625 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -402,7 +402,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize
 	if (!tb->ctl)
 		goto err_destroy_wq;
 
-	tb->dev.parent = &nhi->pdev->dev;
+	tb->dev.parent = nhi->dev;
 	tb->dev.bus = &tb_bus_type;
 	tb->dev.type = &tb_domain_type;
 	tb->dev.groups = domain_attr_groups;
diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 5477b9437048..5681c17f82ec 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -465,7 +465,7 @@ static void tb_switch_drom_free(struct tb_switch *sw)
  */
 static int tb_drom_copy_efi(struct tb_switch *sw, u16 *size)
 {
-	struct device *dev = &sw->tb->nhi->pdev->dev;
+	struct device *dev = sw->tb->nhi->dev;
 	int len, res;
 
 	len = device_property_count_u8(dev, "ThunderboltDROM");
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index 9d95bf3ab44c..97c33752a075 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -1455,6 +1455,7 @@ static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
 
 static bool icm_ar_is_supported(struct tb *tb)
 {
+	struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
 	struct pci_dev *upstream_port;
 	struct icm *icm = tb_priv(tb);
 
@@ -1472,7 +1473,7 @@ static bool icm_ar_is_supported(struct tb *tb)
 	 * Find the upstream PCIe port in case we need to do reset
 	 * through its vendor specific registers.
 	 */
-	upstream_port = get_upstream_port(tb->nhi->pdev);
+	upstream_port = get_upstream_port(pdev);
 	if (upstream_port) {
 		int cap;
 
@@ -1508,7 +1509,7 @@ static int icm_ar_get_mode(struct tb *tb)
 	} while (--retries);
 
 	if (!retries) {
-		dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n");
+		dev_err(nhi->dev, "ICM firmware not authenticated\n");
 		return -ENODEV;
 	}
 
@@ -1674,11 +1675,11 @@ icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
 
 static void icm_icl_set_uuid(struct tb *tb)
 {
-	struct tb_nhi *nhi = tb->nhi;
+	struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
 	u32 uuid[4];
 
-	pci_read_config_dword(nhi->pdev, VS_CAP_10, &uuid[0]);
-	pci_read_config_dword(nhi->pdev, VS_CAP_11, &uuid[1]);
+	pci_read_config_dword(pdev, VS_CAP_10, &uuid[0]);
+	pci_read_config_dword(pdev, VS_CAP_11, &uuid[1]);
 	uuid[2] = 0xffffffff;
 	uuid[3] = 0xffffffff;
 
@@ -1853,7 +1854,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
 	if (icm_firmware_running(nhi))
 		return 0;
 
-	dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n");
+	dev_dbg(nhi->dev, "starting ICM firmware\n");
 
 	ret = icm_firmware_reset(tb, nhi);
 	if (ret)
@@ -1948,7 +1949,7 @@ static int icm_firmware_init(struct tb *tb)
 
 	ret = icm_firmware_start(tb, nhi);
 	if (ret) {
-		dev_err(&nhi->pdev->dev, "could not start ICM firmware\n");
+		dev_err(nhi->dev, "could not start ICM firmware\n");
 		return ret;
 	}
 
@@ -1980,10 +1981,10 @@ static int icm_firmware_init(struct tb *tb)
 	 */
 	ret = icm_reset_phy_port(tb, 0);
 	if (ret)
-		dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n");
+		dev_warn(nhi->dev, "failed to reset links on port0\n");
 	ret = icm_reset_phy_port(tb, 1);
 	if (ret)
-		dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n");
+		dev_warn(nhi->dev, "failed to reset links on port1\n");
 
 	return 0;
 }
@@ -2462,6 +2463,7 @@ static const struct tb_cm_ops icm_icl_ops = {
 
 struct tb *icm_probe(struct tb_nhi *nhi)
 {
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	struct icm *icm;
 	struct tb *tb;
 
@@ -2473,7 +2475,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
 	INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work);
 	mutex_init(&icm->request_lock);
 
-	switch (nhi->pdev->device) {
+	switch (pdev->device) {
 	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
 	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
 		icm->can_upgrade_nvm = true;
@@ -2579,7 +2581,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
 	}
 
 	if (!icm->is_supported || !icm->is_supported(tb)) {
-		dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n");
+		dev_dbg(nhi->dev, "ICM not supported on this controller\n");
 		tb_domain_put(tb);
 		return NULL;
 	}
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 2bb2e79ca3cb..2d01e698dd65 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -2,7 +2,7 @@
 /*
  * Thunderbolt driver - NHI driver
  *
- * The NHI (native host interface) is the pci device that allows us to send and
+ * The NHI (native host interface) is the device that allows us to send and
  * receive frames from the thunderbolt bus.
  *
  * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
@@ -12,12 +12,12 @@
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/errno.h>
-#include <linux/pci.h>
 #include <linux/dma-mapping.h>
 #include <linux/interrupt.h>
 #include <linux/iommu.h>
 #include <linux/module.h>
 #include <linux/delay.h>
+#include <linux/platform_data/x86/apple.h>
 #include <linux/property.h>
 #include <linux/string_choices.h>
 #include <linux/string_helpers.h>
@@ -51,6 +51,16 @@ static bool host_reset = true;
 module_param(host_reset, bool, 0444);
 MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)");
 
+struct tb_nhi_pci {
+	struct tb_nhi nhi;
+	struct ida msix_ida;
+};
+
+static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
+{
+	return container_of(nhi, struct tb_nhi_pci, nhi);
+}
+
 static int ring_interrupt_index(const struct tb_ring *ring)
 {
 	int bit = ring->hop;
@@ -139,12 +149,12 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
 	else
 		new = old & ~mask;
 
-	dev_dbg(&ring->nhi->pdev->dev,
+	dev_dbg(ring->nhi->dev,
 		"%s interrupt at register %#x bit %d (%#x -> %#x)\n",
 		active ? "enabling" : "disabling", reg, interrupt_bit, old, new);
 
 	if (new == old)
-		dev_WARN(&ring->nhi->pdev->dev,
+		dev_WARN(ring->nhi->dev,
 					 "interrupt for %s %d is already %s\n",
 					 RING_TYPE(ring), ring->hop,
 					 str_enabled_disabled(active));
@@ -462,19 +472,21 @@ static irqreturn_t ring_msix(int irq, void *data)
 static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
 {
 	struct tb_nhi *nhi = ring->nhi;
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	unsigned long irqflags;
 	int ret;
 
-	if (!nhi->pdev->msix_enabled)
+	if (!pdev->msix_enabled)
 		return 0;
 
-	ret = ida_alloc_max(&nhi->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
+	ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
 	if (ret < 0)
 		return ret;
 
 	ring->vector = ret;
 
-	ret = pci_irq_vector(ring->nhi->pdev, ring->vector);
+	ret = pci_irq_vector(pdev, ring->vector);
 	if (ret < 0)
 		goto err_ida_remove;
 
@@ -488,18 +500,20 @@ static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
 	return 0;
 
 err_ida_remove:
-	ida_free(&nhi->msix_ida, ring->vector);
+	ida_free(&nhi_pci->msix_ida, ring->vector);
 
 	return ret;
 }
 
 static void ring_release_msix(struct tb_ring *ring)
 {
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
+
 	if (ring->irq <= 0)
 		return;
 
 	free_irq(ring->irq, ring);
-	ida_free(&ring->nhi->msix_ida, ring->vector);
+	ida_free(&nhi_pci->msix_ida, ring->vector);
 	ring->vector = 0;
 	ring->irq = 0;
 }
@@ -512,7 +526,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
 	if (nhi->quirks & QUIRK_E2E) {
 		start_hop = RING_FIRST_USABLE_HOPID + 1;
 		if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
-			dev_dbg(&nhi->pdev->dev, "quirking E2E TX HopID %u -> %u\n",
+			dev_dbg(nhi->dev, "quirking E2E TX HopID %u -> %u\n",
 				ring->e2e_tx_hop, RING_E2E_RESERVED_HOPID);
 			ring->e2e_tx_hop = RING_E2E_RESERVED_HOPID;
 		}
@@ -543,23 +557,23 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
 	}
 
 	if (ring->hop > 0 && ring->hop < start_hop) {
-		dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
+		dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
 		ret = -EINVAL;
 		goto err_unlock;
 	}
 	if (ring->hop < 0 || ring->hop >= nhi->hop_count) {
-		dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
+		dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
 		ret = -EINVAL;
 		goto err_unlock;
 	}
 	if (ring->is_tx && nhi->tx_rings[ring->hop]) {
-		dev_warn(&nhi->pdev->dev, "TX hop %d already allocated\n",
+		dev_warn(nhi->dev, "TX hop %d already allocated\n",
 			 ring->hop);
 		ret = -EBUSY;
 		goto err_unlock;
 	}
 	if (!ring->is_tx && nhi->rx_rings[ring->hop]) {
-		dev_warn(&nhi->pdev->dev, "RX hop %d already allocated\n",
+		dev_warn(nhi->dev, "RX hop %d already allocated\n",
 			 ring->hop);
 		ret = -EBUSY;
 		goto err_unlock;
@@ -584,7 +598,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 {
 	struct tb_ring *ring = NULL;
 
-	dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
+	dev_dbg(nhi->dev, "allocating %s ring %d of size %d\n",
 		transmit ? "TX" : "RX", hop, size);
 
 	ring = kzalloc_obj(*ring);
@@ -610,7 +624,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	ring->start_poll = start_poll;
 	ring->poll_data = poll_data;
 
-	ring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,
+	ring->descriptors = dma_alloc_coherent(ring->nhi->dev,
 			size * sizeof(*ring->descriptors),
 			&ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
 	if (!ring->descriptors)
@@ -627,7 +641,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 err_release_msix:
 	ring_release_msix(ring);
 err_free_descs:
-	dma_free_coherent(&ring->nhi->pdev->dev,
+	dma_free_coherent(ring->nhi->dev,
 			  ring->size * sizeof(*ring->descriptors),
 			  ring->descriptors, ring->descriptors_dma);
 err_free_ring:
@@ -694,10 +708,10 @@ void tb_ring_start(struct tb_ring *ring)
 	if (ring->nhi->going_away)
 		goto err;
 	if (ring->running) {
-		dev_WARN(&ring->nhi->pdev->dev, "ring already started\n");
+		dev_WARN(ring->nhi->dev, "ring already started\n");
 		goto err;
 	}
-	dev_dbg(&ring->nhi->pdev->dev, "starting %s %d\n",
+	dev_dbg(ring->nhi->dev, "starting %s %d\n",
 		RING_TYPE(ring), ring->hop);
 
 	if (ring->flags & RING_FLAG_FRAME) {
@@ -734,11 +748,11 @@ void tb_ring_start(struct tb_ring *ring)
 			hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
 			flags |= hop;
 
-			dev_dbg(&ring->nhi->pdev->dev,
+			dev_dbg(ring->nhi->dev,
 				"enabling E2E for %s %d with TX HopID %d\n",
 				RING_TYPE(ring), ring->hop, ring->e2e_tx_hop);
 		} else {
-			dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n",
+			dev_dbg(ring->nhi->dev, "enabling E2E for %s %d\n",
 				RING_TYPE(ring), ring->hop);
 		}
 
@@ -772,12 +786,12 @@ void tb_ring_stop(struct tb_ring *ring)
 {
 	spin_lock_irq(&ring->nhi->lock);
 	spin_lock(&ring->lock);
-	dev_dbg(&ring->nhi->pdev->dev, "stopping %s %d\n",
+	dev_dbg(ring->nhi->dev, "stopping %s %d\n",
 		RING_TYPE(ring), ring->hop);
 	if (ring->nhi->going_away)
 		goto err;
 	if (!ring->running) {
-		dev_WARN(&ring->nhi->pdev->dev, "%s %d already stopped\n",
+		dev_WARN(ring->nhi->dev, "%s %d already stopped\n",
 			 RING_TYPE(ring), ring->hop);
 		goto err;
 	}
@@ -826,14 +840,14 @@ void tb_ring_free(struct tb_ring *ring)
 		ring->nhi->rx_rings[ring->hop] = NULL;
 
 	if (ring->running) {
-		dev_WARN(&ring->nhi->pdev->dev, "%s %d still running\n",
+		dev_WARN(ring->nhi->dev, "%s %d still running\n",
 			 RING_TYPE(ring), ring->hop);
 	}
 	spin_unlock_irq(&ring->nhi->lock);
 
 	ring_release_msix(ring);
 
-	dma_free_coherent(&ring->nhi->pdev->dev,
+	dma_free_coherent(ring->nhi->dev,
 			  ring->size * sizeof(*ring->descriptors),
 			  ring->descriptors, ring->descriptors_dma);
 
@@ -841,7 +855,7 @@ void tb_ring_free(struct tb_ring *ring)
 	ring->descriptors_dma = 0;
 
 
-	dev_dbg(&ring->nhi->pdev->dev, "freeing %s %d\n", RING_TYPE(ring),
+	dev_dbg(ring->nhi->dev, "freeing %s %d\n", RING_TYPE(ring),
 		ring->hop);
 
 	/*
@@ -940,7 +954,7 @@ static void nhi_interrupt_work(struct work_struct *work)
 		if ((value & (1 << (bit % 32))) == 0)
 			continue;
 		if (type == 2) {
-			dev_warn(&nhi->pdev->dev,
+			dev_warn(nhi->dev,
 				 "RX overflow for ring %d\n",
 				 hop);
 			continue;
@@ -950,7 +964,7 @@ static void nhi_interrupt_work(struct work_struct *work)
 		else
 			ring = nhi->rx_rings[hop];
 		if (ring == NULL) {
-			dev_warn(&nhi->pdev->dev,
+			dev_warn(nhi->dev,
 				 "got interrupt for inactive %s ring %d\n",
 				 type ? "RX" : "TX",
 				 hop);
@@ -1139,16 +1153,18 @@ static int nhi_runtime_resume(struct device *dev)
 
 static void nhi_shutdown(struct tb_nhi *nhi)
 {
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	int i;
 
-	dev_dbg(&nhi->pdev->dev, "shutdown\n");
+	dev_dbg(nhi->dev, "shutdown\n");
 
 	for (i = 0; i < nhi->hop_count; i++) {
 		if (nhi->tx_rings[i])
-			dev_WARN(&nhi->pdev->dev,
+			dev_WARN(nhi->dev,
 				 "TX ring %d is still active\n", i);
 		if (nhi->rx_rings[i])
-			dev_WARN(&nhi->pdev->dev,
+			dev_WARN(nhi->dev,
 				 "RX ring %d is still active\n", i);
 	}
 	nhi_disable_interrupts(nhi);
@@ -1156,19 +1172,22 @@ static void nhi_shutdown(struct tb_nhi *nhi)
 	 * We have to release the irq before calling flush_work. Otherwise an
 	 * already executing IRQ handler could call schedule_work again.
 	 */
-	if (!nhi->pdev->msix_enabled) {
-		devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
+	if (!pdev->msix_enabled) {
+		devm_free_irq(nhi->dev, pdev->irq, nhi);
 		flush_work(&nhi->interrupt_work);
 	}
-	ida_destroy(&nhi->msix_ida);
+	ida_destroy(&nhi_pci->msix_ida);
 
 	if (nhi->ops && nhi->ops->shutdown)
 		nhi->ops->shutdown(nhi);
 }
 
-static void nhi_check_quirks(struct tb_nhi *nhi)
+static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci)
 {
-	if (nhi->pdev->vendor == PCI_VENDOR_ID_INTEL) {
+	struct tb_nhi *nhi = &nhi_pci->nhi;
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
 		/*
 		 * Intel hardware supports auto clear of the interrupt
 		 * status register right after interrupt is being
@@ -1176,7 +1195,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
 		 */
 		nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
 
-		switch (nhi->pdev->device) {
+		switch (pdev->device) {
 		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
 		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
 			/*
@@ -1190,7 +1209,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
 	}
 }
 
-static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
+static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data)
 {
 	if (!pdev->external_facing ||
 	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
@@ -1199,9 +1218,11 @@ static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
 	return 1; /* Stop walking */
 }
 
-static void nhi_check_iommu(struct tb_nhi *nhi)
+static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci)
 {
-	struct pci_bus *bus = nhi->pdev->bus;
+	struct tb_nhi *nhi = &nhi_pci->nhi;
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	struct pci_bus *bus = pdev->bus;
 	bool port_ok = false;
 
 	/*
@@ -1224,10 +1245,10 @@ static void nhi_check_iommu(struct tb_nhi *nhi)
 	while (bus->parent)
 		bus = bus->parent;
 
-	pci_walk_bus(bus, nhi_check_iommu_pdev, &port_ok);
+	pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok);
 
 	nhi->iommu_dma_protection = port_ok;
-	dev_dbg(&nhi->pdev->dev, "IOMMU DMA protection is %s\n",
+	dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
 		str_enabled_disabled(port_ok));
 }
 
@@ -1242,7 +1263,7 @@ static void nhi_reset(struct tb_nhi *nhi)
 		return;
 
 	if (!host_reset) {
-		dev_dbg(&nhi->pdev->dev, "skipping host router reset\n");
+		dev_dbg(nhi->dev, "skipping host router reset\n");
 		return;
 	}
 
@@ -1253,27 +1274,23 @@ static void nhi_reset(struct tb_nhi *nhi)
 	do {
 		val = ioread32(nhi->iobase + REG_RESET);
 		if (!(val & REG_RESET_HRR)) {
-			dev_warn(&nhi->pdev->dev, "host router reset successful\n");
+			dev_warn(nhi->dev, "host router reset successful\n");
 			return;
 		}
 		usleep_range(10, 20);
 	} while (ktime_before(ktime_get(), timeout));
 
-	dev_warn(&nhi->pdev->dev, "timeout resetting host router\n");
+	dev_warn(nhi->dev, "timeout resetting host router\n");
 }
 
-static int nhi_init_msi(struct tb_nhi *nhi)
+static int nhi_init_msi(struct tb_nhi_pci *nhi_pci)
 {
-	struct pci_dev *pdev = nhi->pdev;
+	struct tb_nhi *nhi = &nhi_pci->nhi;
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	struct device *dev = &pdev->dev;
 	int res, irq, nvec;
 
-	/* In case someone left them on. */
-	nhi_disable_interrupts(nhi);
-
-	nhi_enable_int_throttling(nhi);
-
-	ida_init(&nhi->msix_ida);
+	ida_init(&nhi_pci->msix_ida);
 
 	/*
 	 * The NHI has 16 MSI-X vectors or a single MSI. We first try to
@@ -1290,7 +1307,7 @@ static int nhi_init_msi(struct tb_nhi *nhi)
 
 		INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
 
-		irq = pci_irq_vector(nhi->pdev, 0);
+		irq = pci_irq_vector(pdev, 0);
 		if (irq < 0)
 			return irq;
 
@@ -1339,6 +1356,7 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi)
 static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct device *dev = &pdev->dev;
+	struct tb_nhi_pci *nhi_pci;
 	struct tb_nhi *nhi;
 	struct tb *tb;
 	int res;
@@ -1350,11 +1368,12 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (res)
 		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
 
-	nhi = devm_kzalloc(&pdev->dev, sizeof(*nhi), GFP_KERNEL);
-	if (!nhi)
+	nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
+	if (!nhi_pci)
 		return -ENOMEM;
 
-	nhi->pdev = pdev;
+	nhi = &nhi_pci->nhi;
+	nhi->dev = dev;
 	nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
 
 	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
@@ -1372,11 +1391,15 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (!nhi->tx_rings || !nhi->rx_rings)
 		return -ENOMEM;
 
-	nhi_check_quirks(nhi);
-	nhi_check_iommu(nhi);
+	nhi_check_quirks(nhi_pci);
+	nhi_check_iommu(nhi_pci);
 	nhi_reset(nhi);
 
-	res = nhi_init_msi(nhi);
+	/* In case someone left them on. */
+	nhi_disable_interrupts(nhi);
+	nhi_enable_int_throttling(nhi);
+
+	res = nhi_init_msi(nhi_pci);
 	if (res)
 		return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
 
@@ -1458,6 +1481,75 @@ static const struct dev_pm_ops nhi_pm_ops = {
 	.runtime_resume = nhi_runtime_resume,
 };
 
+/*
+ * During suspend the Thunderbolt controller is reset and all PCIe
+ * tunnels are lost. The NHI driver will try to reestablish all tunnels
+ * during resume. This adds device links between the tunneled PCIe
+ * downstream ports and the NHI so that the device core will make sure
+ * NHI is resumed first before the rest.
+ */
+bool tb_apple_add_links(struct tb_nhi *nhi)
+{
+	struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
+	struct pci_dev *upstream, *pdev;
+	bool ret;
+
+	if (!x86_apple_machine)
+		return false;
+
+	switch (nhi_pdev->device) {
+	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+		break;
+	default:
+		return false;
+	}
+
+	upstream = pci_upstream_bridge(nhi_pdev);
+	while (upstream) {
+		if (!pci_is_pcie(upstream))
+			return false;
+		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
+			break;
+		upstream = pci_upstream_bridge(upstream);
+	}
+
+	if (!upstream)
+		return false;
+
+	/*
+	 * For each hotplug downstream port, create add device link
+	 * back to NHI so that PCIe tunnels can be re-established after
+	 * sleep.
+	 */
+	ret = false;
+	for_each_pci_bridge(pdev, upstream->subordinate) {
+		const struct device_link *link;
+
+		if (!pci_is_pcie(pdev))
+			continue;
+		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
+		    !pdev->is_pciehp)
+			continue;
+
+		link = device_link_add(&pdev->dev, nhi->dev,
+				       DL_FLAG_AUTOREMOVE_SUPPLIER |
+				       DL_FLAG_PM_RUNTIME);
+		if (link) {
+			dev_dbg(nhi->dev, "created link from %s\n",
+				dev_name(&pdev->dev));
+			ret = true;
+		} else {
+			dev_warn(nhi->dev, "device link creation from %s failed\n",
+				 dev_name(&pdev->dev));
+		}
+	}
+
+	return ret;
+}
+
 static struct pci_device_id nhi_ids[] = {
 	/*
 	 * We have to specify class, the TB bridges use the same device and
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 24ac4246d0ca..efcd119e26f8 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -29,6 +29,7 @@ enum nhi_mailbox_cmd {
 
 int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data);
 enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
+bool tb_apple_add_links(struct tb_nhi *nhi);
 
 /**
  * struct tb_nhi_ops - NHI specific optional operations
diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
index 96da07e88c52..8c50066f3411 100644
--- a/drivers/thunderbolt/nhi_ops.c
+++ b/drivers/thunderbolt/nhi_ops.c
@@ -24,7 +24,7 @@ static int check_for_device(struct device *dev, void *data)
 
 static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
 {
-	struct tb *tb = pci_get_drvdata(nhi->pdev);
+	struct tb *tb = dev_get_drvdata(nhi->dev);
 	int ret;
 
 	ret = device_for_each_child(&tb->root_switch->dev, NULL,
@@ -34,6 +34,7 @@ static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
 
 static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
 {
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	u32 vs_cap;
 
 	/*
@@ -48,7 +49,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
 	 * The actual power management happens inside shared ACPI power
 	 * resources using standard ACPI methods.
 	 */
-	pci_read_config_dword(nhi->pdev, VS_CAP_22, &vs_cap);
+	pci_read_config_dword(pdev, VS_CAP_22, &vs_cap);
 	if (power) {
 		vs_cap &= ~VS_CAP_22_DMA_DELAY_MASK;
 		vs_cap |= 0x22 << VS_CAP_22_DMA_DELAY_SHIFT;
@@ -56,7 +57,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
 	} else {
 		vs_cap &= ~VS_CAP_22_FORCE_POWER;
 	}
-	pci_write_config_dword(nhi->pdev, VS_CAP_22, vs_cap);
+	pci_write_config_dword(pdev, VS_CAP_22, vs_cap);
 
 	if (power) {
 		unsigned int retries = 350;
@@ -64,7 +65,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
 
 		/* Wait until the firmware tells it is up and running */
 		do {
-			pci_read_config_dword(nhi->pdev, VS_CAP_9, &val);
+			pci_read_config_dword(pdev, VS_CAP_9, &val);
 			if (val & VS_CAP_9_FW_READY)
 				return 0;
 			usleep_range(3000, 3100);
@@ -78,14 +79,16 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
 
 static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox_cmd cmd)
 {
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	u32 data;
 
 	data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK;
-	pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID);
+	pci_write_config_dword(pdev, VS_CAP_19, data | VS_CAP_19_VALID);
 }
 
 static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
 {
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	unsigned long end;
 	u32 data;
 
@@ -94,7 +97,7 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
 
 	end = jiffies + msecs_to_jiffies(timeout);
 	do {
-		pci_read_config_dword(nhi->pdev, VS_CAP_18, &data);
+		pci_read_config_dword(pdev, VS_CAP_18, &data);
 		if (data & VS_CAP_18_DONE)
 			goto clear;
 		usleep_range(1000, 1100);
@@ -104,24 +107,25 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
 
 clear:
 	/* Clear the valid bit */
-	pci_write_config_dword(nhi->pdev, VS_CAP_19, 0);
+	pci_write_config_dword(pdev, VS_CAP_19, 0);
 	return 0;
 }
 
 static void icl_nhi_set_ltr(struct tb_nhi *nhi)
 {
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	u32 max_ltr, ltr;
 
-	pci_read_config_dword(nhi->pdev, VS_CAP_16, &max_ltr);
+	pci_read_config_dword(pdev, VS_CAP_16, &max_ltr);
 	max_ltr &= 0xffff;
 	/* Program the same value for both snoop and no-snoop */
 	ltr = max_ltr << 16 | max_ltr;
-	pci_write_config_dword(nhi->pdev, VS_CAP_15, ltr);
+	pci_write_config_dword(pdev, VS_CAP_15, ltr);
 }
 
 static int icl_nhi_suspend(struct tb_nhi *nhi)
 {
-	struct tb *tb = pci_get_drvdata(nhi->pdev);
+	struct tb *tb = dev_get_drvdata(nhi->dev);
 	int ret;
 
 	if (icl_nhi_is_device_connected(nhi))
@@ -144,7 +148,7 @@ static int icl_nhi_suspend(struct tb_nhi *nhi)
 
 static int icl_nhi_suspend_noirq(struct tb_nhi *nhi, bool wakeup)
 {
-	struct tb *tb = pci_get_drvdata(nhi->pdev);
+	struct tb *tb = dev_get_drvdata(nhi->dev);
 	enum icl_lc_mailbox_cmd cmd;
 
 	if (!pm_suspend_via_firmware())
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index c2ad58b19e7b..0680209e349c 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -211,6 +211,7 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
 
 static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
 {
+	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
 	struct pci_dev *root_port;
 
 	/*
@@ -219,16 +220,17 @@ static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
 	 * itself. To be on the safe side keep the root port in D0 during
 	 * the whole upgrade process.
 	 */
-	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
+	root_port = pcie_find_root_port(pdev);
 	if (root_port)
 		pm_runtime_get_noresume(&root_port->dev);
 }
 
 static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
 {
+	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
 	struct pci_dev *root_port;
 
-	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
+	root_port = pcie_find_root_port(pdev);
 	if (root_port)
 		pm_runtime_put(&root_port->dev);
 }
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index c69c323e6952..0126e38d9396 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -10,7 +10,6 @@
 #include <linux/errno.h>
 #include <linux/delay.h>
 #include <linux/pm_runtime.h>
-#include <linux/platform_data/x86/apple.h>
 
 #include "tb.h"
 #include "tb_regs.h"
@@ -3295,74 +3294,6 @@ static const struct tb_cm_ops tb_cm_ops = {
 	.disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
 };
 
-/*
- * During suspend the Thunderbolt controller is reset and all PCIe
- * tunnels are lost. The NHI driver will try to reestablish all tunnels
- * during resume. This adds device links between the tunneled PCIe
- * downstream ports and the NHI so that the device core will make sure
- * NHI is resumed first before the rest.
- */
-static bool tb_apple_add_links(struct tb_nhi *nhi)
-{
-	struct pci_dev *upstream, *pdev;
-	bool ret;
-
-	if (!x86_apple_machine)
-		return false;
-
-	switch (nhi->pdev->device) {
-	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
-	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
-		break;
-	default:
-		return false;
-	}
-
-	upstream = pci_upstream_bridge(nhi->pdev);
-	while (upstream) {
-		if (!pci_is_pcie(upstream))
-			return false;
-		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
-			break;
-		upstream = pci_upstream_bridge(upstream);
-	}
-
-	if (!upstream)
-		return false;
-
-	/*
-	 * For each hotplug downstream port, create add device link
-	 * back to NHI so that PCIe tunnels can be re-established after
-	 * sleep.
-	 */
-	ret = false;
-	for_each_pci_bridge(pdev, upstream->subordinate) {
-		const struct device_link *link;
-
-		if (!pci_is_pcie(pdev))
-			continue;
-		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
-		    !pdev->is_pciehp)
-			continue;
-
-		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
-				       DL_FLAG_AUTOREMOVE_SUPPLIER |
-				       DL_FLAG_PM_RUNTIME);
-		if (link) {
-			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
-				dev_name(&pdev->dev));
-			ret = true;
-		} else {
-			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
-				 dev_name(&pdev->dev));
-		}
-	}
-
-	return ret;
-}
-
 struct tb *tb_probe(struct tb_nhi *nhi)
 {
 	struct tb_cm *tcm;
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 217c3114bec8..4e11060e144b 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -725,11 +725,11 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
 			    length);
 }
 
-#define tb_err(tb, fmt, arg...) dev_err(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_warn(tb, fmt, arg...) dev_warn(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg)
+#define tb_err(tb, fmt, arg...) dev_err((tb)->nhi->dev, fmt, ## arg)
+#define tb_WARN(tb, fmt, arg...) dev_WARN((tb)->nhi->dev, fmt, ## arg)
+#define tb_warn(tb, fmt, arg...) dev_warn((tb)->nhi->dev, fmt, ## arg)
+#define tb_info(tb, fmt, arg...) dev_info((tb)->nhi->dev, fmt, ## arg)
+#define tb_dbg(tb, fmt, arg...) dev_dbg((tb)->nhi->dev, fmt, ## arg)
 
 #define __TB_SW_PRINT(level, sw, fmt, arg...)           \
 	do {                                            \
diff --git a/drivers/thunderbolt/usb4_port.c b/drivers/thunderbolt/usb4_port.c
index c32d3516e780..890de530debc 100644
--- a/drivers/thunderbolt/usb4_port.c
+++ b/drivers/thunderbolt/usb4_port.c
@@ -138,7 +138,7 @@ bool usb4_usb3_port_match(struct device *usb4_port_dev,
 		return false;
 
 	/* Check if USB3 fwnode references same NHI where USB4 port resides */
-	if (!device_match_fwnode(&nhi->pdev->dev, nhi_fwnode))
+	if (!device_match_fwnode(nhi->dev, nhi_fwnode))
 		return false;
 
 	if (fwnode_property_read_u8(usb3_port_fwnode, "usb4-port-number", &usb4_port_num))
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index 0ba112175bb3..789cd7f364e1 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -496,12 +496,11 @@ static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
  */
 struct tb_nhi {
 	spinlock_t lock;
-	struct pci_dev *pdev;
+	struct device *dev;
 	const struct tb_nhi_ops *ops;
 	void __iomem *iobase;
 	struct tb_ring **tx_rings;
 	struct tb_ring **rx_rings;
-	struct ida msix_ida;
 	bool going_away;
 	bool iommu_dma_protection;
 	struct work_struct interrupt_work;
@@ -681,7 +680,7 @@ void tb_ring_poll_complete(struct tb_ring *ring);
  */
 static inline struct device *tb_ring_dma_device(struct tb_ring *ring)
 {
-	return &ring->nhi->pdev->dev;
+	return ring->nhi->dev;
 }
 
 bool usb4_usb3_port_match(struct device *usb4_port_dev,

-- 
2.54.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-04-28 18:49 [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
@ 2026-04-28 18:49 ` Konrad Dybcio
  2026-05-04  6:54   ` Mika Westerberg
  2026-04-28 18:49 ` [PATCH v2 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 4/4] thunderbolt: Add some more descriptive probe error messages Konrad Dybcio
  3 siblings, 1 reply; 13+ messages in thread
From: Konrad Dybcio @ 2026-04-28 18:49 UTC (permalink / raw)
  To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
  Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
	Konrad Dybcio

From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>

Add a new file encapsulating most of the PCI NHI specifics
(intentionally leaving some odd cookies behind to make the layering
simpler). Most notably, separate out nhi_probe() to make it easier to
register other types of NHIs.

Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
 drivers/thunderbolt/Makefile  |   2 +-
 drivers/thunderbolt/nhi.c     | 527 ++++--------------------------------------
 drivers/thunderbolt/nhi.h     |  31 +++
 drivers/thunderbolt/nhi_ops.c |   9 +
 drivers/thunderbolt/pci.c     | 507 ++++++++++++++++++++++++++++++++++++++++
 drivers/thunderbolt/pci.h     |  19 ++
 drivers/thunderbolt/switch.c  |  43 +---
 7 files changed, 626 insertions(+), 512 deletions(-)

diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index b44b32dcb832..eb1bfc5e5c3c 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 ccflags-y := -I$(src)
 obj-${CONFIG_USB4} := thunderbolt.o
-thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
+thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o pci.o path.o tunnel.o eeprom.o
 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
 thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o
 
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 2d01e698dd65..3ceca434155d 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -17,7 +17,6 @@
 #include <linux/iommu.h>
 #include <linux/module.h>
 #include <linux/delay.h>
-#include <linux/platform_data/x86/apple.h>
 #include <linux/property.h>
 #include <linux/string_choices.h>
 #include <linux/string_helpers.h>
@@ -34,33 +33,13 @@
  * transferred.
  */
 #define RING_E2E_RESERVED_HOPID	RING_FIRST_USABLE_HOPID
-/*
- * Minimal number of vectors when we use MSI-X. Two for control channel
- * Rx/Tx and the rest four are for cross domain DMA paths.
- */
-#define MSIX_MIN_VECS		6
-#define MSIX_MAX_VECS		16
 
 #define NHI_MAILBOX_TIMEOUT	500 /* ms */
 
-/* Host interface quirks */
-#define QUIRK_AUTO_CLEAR_INT	BIT(0)
-#define QUIRK_E2E		BIT(1)
-
 static bool host_reset = true;
 module_param(host_reset, bool, 0444);
 MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)");
 
-struct tb_nhi_pci {
-	struct tb_nhi nhi;
-	struct ida msix_ida;
-};
-
-static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
-{
-	return container_of(nhi, struct tb_nhi_pci, nhi);
-}
-
 static int ring_interrupt_index(const struct tb_ring *ring)
 {
 	int bit = ring->hop;
@@ -170,7 +149,7 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
  *
  * Use only during init and shutdown.
  */
-static void nhi_disable_interrupts(struct tb_nhi *nhi)
+void nhi_disable_interrupts(struct tb_nhi *nhi)
 {
 	int i = 0;
 	/* disable interrupts */
@@ -455,7 +434,7 @@ static void ring_clear_msix(const struct tb_ring *ring)
 			  4 * (ring->nhi->hop_count / 32));
 }
 
-static irqreturn_t ring_msix(int irq, void *data)
+irqreturn_t ring_msix(int irq, void *data)
 {
 	struct tb_ring *ring = data;
 
@@ -469,55 +448,6 @@ static irqreturn_t ring_msix(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
-static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
-{
-	struct tb_nhi *nhi = ring->nhi;
-	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
-	struct pci_dev *pdev = to_pci_dev(nhi->dev);
-	unsigned long irqflags;
-	int ret;
-
-	if (!pdev->msix_enabled)
-		return 0;
-
-	ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
-	if (ret < 0)
-		return ret;
-
-	ring->vector = ret;
-
-	ret = pci_irq_vector(pdev, ring->vector);
-	if (ret < 0)
-		goto err_ida_remove;
-
-	ring->irq = ret;
-
-	irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
-	ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
-	if (ret)
-		goto err_ida_remove;
-
-	return 0;
-
-err_ida_remove:
-	ida_free(&nhi_pci->msix_ida, ring->vector);
-
-	return ret;
-}
-
-static void ring_release_msix(struct tb_ring *ring)
-{
-	struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
-
-	if (ring->irq <= 0)
-		return;
-
-	free_irq(ring->irq, ring);
-	ida_free(&nhi_pci->msix_ida, ring->vector);
-	ring->vector = 0;
-	ring->irq = 0;
-}
-
 static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
 {
 	unsigned int start_hop = RING_FIRST_USABLE_HOPID;
@@ -630,8 +560,10 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	if (!ring->descriptors)
 		goto err_free_ring;
 
-	if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND))
-		goto err_free_descs;
+	if (nhi->ops && nhi->ops->request_ring_irq) {
+		if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND))
+			goto err_free_descs;
+	}
 
 	if (nhi_alloc_hop(nhi, ring))
 		goto err_release_msix;
@@ -639,7 +571,8 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	return ring;
 
 err_release_msix:
-	ring_release_msix(ring);
+	if (nhi->ops && nhi->ops->release_ring_irq)
+		nhi->ops->release_ring_irq(ring);
 err_free_descs:
 	dma_free_coherent(ring->nhi->dev,
 			  ring->size * sizeof(*ring->descriptors),
@@ -829,6 +762,8 @@ EXPORT_SYMBOL_GPL(tb_ring_stop);
  */
 void tb_ring_free(struct tb_ring *ring)
 {
+	struct tb_nhi *nhi = ring->nhi;
+
 	spin_lock_irq(&ring->nhi->lock);
 	/*
 	 * Dissociate the ring from the NHI. This also ensures that
@@ -845,7 +780,8 @@ void tb_ring_free(struct tb_ring *ring)
 	}
 	spin_unlock_irq(&ring->nhi->lock);
 
-	ring_release_msix(ring);
+	if (nhi->ops && nhi->ops->release_ring_irq)
+		nhi->ops->release_ring_irq(ring);
 
 	dma_free_coherent(ring->nhi->dev,
 			  ring->size * sizeof(*ring->descriptors),
@@ -926,7 +862,7 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi)
 	return (enum nhi_fw_mode)val;
 }
 
-static void nhi_interrupt_work(struct work_struct *work)
+void nhi_interrupt_work(struct work_struct *work)
 {
 	struct tb_nhi *nhi = container_of(work, typeof(*nhi), interrupt_work);
 	int value = 0; /* Suppress uninitialized usage warning. */
@@ -978,7 +914,7 @@ static void nhi_interrupt_work(struct work_struct *work)
 	spin_unlock_irq(&nhi->lock);
 }
 
-static irqreturn_t nhi_msi(int irq, void *data)
+irqreturn_t nhi_msi(int irq, void *data)
 {
 	struct tb_nhi *nhi = data;
 	schedule_work(&nhi->interrupt_work);
@@ -987,8 +923,7 @@ static irqreturn_t nhi_msi(int irq, void *data)
 
 static int __nhi_suspend_noirq(struct device *dev, bool wakeup)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 	struct tb_nhi *nhi = tb->nhi;
 	int ret;
 
@@ -1012,21 +947,19 @@ static int nhi_suspend_noirq(struct device *dev)
 
 static int nhi_freeze_noirq(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 
 	return tb_domain_freeze_noirq(tb);
 }
 
 static int nhi_thaw_noirq(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 
 	return tb_domain_thaw_noirq(tb);
 }
 
-static bool nhi_wake_supported(struct pci_dev *pdev)
+static bool nhi_wake_supported(struct device *dev)
 {
 	u8 val;
 
@@ -1034,7 +967,7 @@ static bool nhi_wake_supported(struct pci_dev *pdev)
 	 * If power rails are sustainable for wakeup from S4 this
 	 * property is set by the BIOS.
 	 */
-	if (!device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val))
+	if (!device_property_read_u8(dev, "WAKE_SUPPORTED", &val))
 		return !!val;
 
 	return true;
@@ -1042,14 +975,13 @@ static bool nhi_wake_supported(struct pci_dev *pdev)
 
 static int nhi_poweroff_noirq(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
 	bool wakeup;
 
-	wakeup = device_may_wakeup(dev) && nhi_wake_supported(pdev);
+	wakeup = device_may_wakeup(dev) && nhi_wake_supported(dev);
 	return __nhi_suspend_noirq(dev, wakeup);
 }
 
-static void nhi_enable_int_throttling(struct tb_nhi *nhi)
+void nhi_enable_int_throttling(struct tb_nhi *nhi)
 {
 	/* Throttling is specified in 256ns increments */
 	u32 throttle = DIV_ROUND_UP(128 * NSEC_PER_USEC, 256);
@@ -1067,8 +999,7 @@ static void nhi_enable_int_throttling(struct tb_nhi *nhi)
 
 static int nhi_resume_noirq(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 	struct tb_nhi *nhi = tb->nhi;
 	int ret;
 
@@ -1077,7 +1008,7 @@ static int nhi_resume_noirq(struct device *dev)
 	 * unplugged last device which causes the host controller to go
 	 * away on PCs.
 	 */
-	if (!pci_device_is_present(pdev)) {
+	if (nhi->ops && nhi->ops->is_present && !nhi->ops->is_present(nhi)) {
 		nhi->going_away = true;
 	} else {
 		if (nhi->ops && nhi->ops->resume_noirq) {
@@ -1093,32 +1024,29 @@ static int nhi_resume_noirq(struct device *dev)
 
 static int nhi_suspend(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 
 	return tb_domain_suspend(tb);
 }
 
 static void nhi_complete(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 
 	/*
 	 * If we were runtime suspended when system suspend started,
 	 * schedule runtime resume now. It should bring the domain back
 	 * to functional state.
 	 */
-	if (pm_runtime_suspended(&pdev->dev))
-		pm_runtime_resume(&pdev->dev);
+	if (pm_runtime_suspended(dev))
+		pm_runtime_resume(dev);
 	else
 		tb_domain_complete(tb);
 }
 
 static int nhi_runtime_suspend(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 	struct tb_nhi *nhi = tb->nhi;
 	int ret;
 
@@ -1136,8 +1064,7 @@ static int nhi_runtime_suspend(struct device *dev)
 
 static int nhi_runtime_resume(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb *tb = dev_get_drvdata(dev);
 	struct tb_nhi *nhi = tb->nhi;
 	int ret;
 
@@ -1151,10 +1078,8 @@ static int nhi_runtime_resume(struct device *dev)
 	return tb_domain_runtime_resume(tb);
 }
 
-static void nhi_shutdown(struct tb_nhi *nhi)
+void nhi_shutdown(struct tb_nhi *nhi)
 {
-	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
-	struct pci_dev *pdev = to_pci_dev(nhi->dev);
 	int i;
 
 	dev_dbg(nhi->dev, "shutdown\n");
@@ -1168,90 +1093,11 @@ static void nhi_shutdown(struct tb_nhi *nhi)
 				 "RX ring %d is still active\n", i);
 	}
 	nhi_disable_interrupts(nhi);
-	/*
-	 * We have to release the irq before calling flush_work. Otherwise an
-	 * already executing IRQ handler could call schedule_work again.
-	 */
-	if (!pdev->msix_enabled) {
-		devm_free_irq(nhi->dev, pdev->irq, nhi);
-		flush_work(&nhi->interrupt_work);
-	}
-	ida_destroy(&nhi_pci->msix_ida);
 
 	if (nhi->ops && nhi->ops->shutdown)
 		nhi->ops->shutdown(nhi);
 }
 
-static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci)
-{
-	struct tb_nhi *nhi = &nhi_pci->nhi;
-	struct pci_dev *pdev = to_pci_dev(nhi->dev);
-
-	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
-		/*
-		 * Intel hardware supports auto clear of the interrupt
-		 * status register right after interrupt is being
-		 * issued.
-		 */
-		nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
-
-		switch (pdev->device) {
-		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
-		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
-			/*
-			 * Falcon Ridge controller needs the end-to-end
-			 * flow control workaround to avoid losing Rx
-			 * packets when RING_FLAG_E2E is set.
-			 */
-			nhi->quirks |= QUIRK_E2E;
-			break;
-		}
-	}
-}
-
-static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data)
-{
-	if (!pdev->external_facing ||
-	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
-		return 0;
-	*(bool *)data = true;
-	return 1; /* Stop walking */
-}
-
-static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci)
-{
-	struct tb_nhi *nhi = &nhi_pci->nhi;
-	struct pci_dev *pdev = to_pci_dev(nhi->dev);
-	struct pci_bus *bus = pdev->bus;
-	bool port_ok = false;
-
-	/*
-	 * Ideally what we'd do here is grab every PCI device that
-	 * represents a tunnelling adapter for this NHI and check their
-	 * status directly, but unfortunately USB4 seems to make it
-	 * obnoxiously difficult to reliably make any correlation.
-	 *
-	 * So for now we'll have to bodge it... Hoping that the system
-	 * is at least sane enough that an adapter is in the same PCI
-	 * segment as its NHI, if we can find *something* on that segment
-	 * which meets the requirements for Kernel DMA Protection, we'll
-	 * take that to imply that firmware is aware and has (hopefully)
-	 * done the right thing in general. We need to know that the PCI
-	 * layer has seen the ExternalFacingPort property which will then
-	 * inform the IOMMU layer to enforce the complete "untrusted DMA"
-	 * flow, but also that the IOMMU driver itself can be trusted not
-	 * to have been subverted by a pre-boot DMA attack.
-	 */
-	while (bus->parent)
-		bus = bus->parent;
-
-	pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok);
-
-	nhi->iommu_dma_protection = port_ok;
-	dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
-		str_enabled_disabled(port_ok));
-}
-
 static void nhi_reset(struct tb_nhi *nhi)
 {
 	ktime_t timeout;
@@ -1283,53 +1129,6 @@ static void nhi_reset(struct tb_nhi *nhi)
 	dev_warn(nhi->dev, "timeout resetting host router\n");
 }
 
-static int nhi_init_msi(struct tb_nhi_pci *nhi_pci)
-{
-	struct tb_nhi *nhi = &nhi_pci->nhi;
-	struct pci_dev *pdev = to_pci_dev(nhi->dev);
-	struct device *dev = &pdev->dev;
-	int res, irq, nvec;
-
-	ida_init(&nhi_pci->msix_ida);
-
-	/*
-	 * The NHI has 16 MSI-X vectors or a single MSI. We first try to
-	 * get all MSI-X vectors and if we succeed, each ring will have
-	 * one MSI-X. If for some reason that does not work out, we
-	 * fallback to a single MSI.
-	 */
-	nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
-				     PCI_IRQ_MSIX);
-	if (nvec < 0) {
-		nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
-		if (nvec < 0)
-			return nvec;
-
-		INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
-
-		irq = pci_irq_vector(pdev, 0);
-		if (irq < 0)
-			return irq;
-
-		res = devm_request_irq(&pdev->dev, irq, nhi_msi,
-				       IRQF_NO_SUSPEND, "thunderbolt", nhi);
-		if (res)
-			return dev_err_probe(dev, res, "request_irq failed, aborting\n");
-	}
-
-	return 0;
-}
-
-static bool nhi_imr_valid(struct pci_dev *pdev)
-{
-	u8 val;
-
-	if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val))
-		return !!val;
-
-	return true;
-}
-
 static struct tb *nhi_select_cm(struct tb_nhi *nhi)
 {
 	struct tb *tb;
@@ -1353,64 +1152,40 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi)
 	return tb;
 }
 
-static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+int nhi_probe(struct tb_nhi *nhi)
 {
-	struct device *dev = &pdev->dev;
-	struct tb_nhi_pci *nhi_pci;
-	struct tb_nhi *nhi;
+	struct device *dev = nhi->dev;
 	struct tb *tb;
 	int res;
 
-	if (!nhi_imr_valid(pdev))
-		return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n");
-
-	res = pcim_enable_device(pdev);
-	if (res)
-		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
-
-	nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
-	if (!nhi_pci)
-		return -ENOMEM;
-
-	nhi = &nhi_pci->nhi;
-	nhi->dev = dev;
-	nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
-
-	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
-	res = PTR_ERR_OR_ZERO(nhi->iobase);
-	if (res)
-		return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
-
 	nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
 	dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
 
-	nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
+	nhi->tx_rings = devm_kcalloc(dev, nhi->hop_count,
 				     sizeof(*nhi->tx_rings), GFP_KERNEL);
-	nhi->rx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
+	nhi->rx_rings = devm_kcalloc(dev, nhi->hop_count,
 				     sizeof(*nhi->rx_rings), GFP_KERNEL);
 	if (!nhi->tx_rings || !nhi->rx_rings)
 		return -ENOMEM;
 
-	nhi_check_quirks(nhi_pci);
-	nhi_check_iommu(nhi_pci);
 	nhi_reset(nhi);
 
 	/* In case someone left them on. */
 	nhi_disable_interrupts(nhi);
 	nhi_enable_int_throttling(nhi);
 
-	res = nhi_init_msi(nhi_pci);
-	if (res)
-		return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
+	if (nhi->ops && nhi->ops->init_interrupts) {
+		res = nhi->ops->init_interrupts(nhi);
+		if (res)
+			return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
+	}
 
 	spin_lock_init(&nhi->lock);
 
-	res = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	res = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
 	if (res)
 		return dev_err_probe(dev, res, "failed to set DMA mask\n");
 
-	pci_set_master(pdev);
-
 	if (nhi->ops && nhi->ops->init) {
 		res = nhi->ops->init(nhi);
 		if (res)
@@ -1434,37 +1209,24 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		nhi_shutdown(nhi);
 		return res;
 	}
-	pci_set_drvdata(pdev, tb);
+	dev_set_drvdata(dev, tb);
 
-	device_wakeup_enable(&pdev->dev);
+	device_wakeup_enable(dev);
 
-	pm_runtime_allow(&pdev->dev);
-	pm_runtime_set_autosuspend_delay(&pdev->dev, TB_AUTOSUSPEND_DELAY);
-	pm_runtime_use_autosuspend(&pdev->dev);
-	pm_runtime_put_autosuspend(&pdev->dev);
+	pm_runtime_allow(dev);
+	pm_runtime_set_autosuspend_delay(dev, TB_AUTOSUSPEND_DELAY);
+	pm_runtime_use_autosuspend(dev);
+	pm_runtime_put_autosuspend(dev);
 
 	return 0;
 }
 
-static void nhi_remove(struct pci_dev *pdev)
-{
-	struct tb *tb = pci_get_drvdata(pdev);
-	struct tb_nhi *nhi = tb->nhi;
-
-	pm_runtime_get_sync(&pdev->dev);
-	pm_runtime_dont_use_autosuspend(&pdev->dev);
-	pm_runtime_forbid(&pdev->dev);
-
-	tb_domain_remove(tb);
-	nhi_shutdown(nhi);
-}
-
 /*
  * The tunneled pci bridges are siblings of us. Use resume_noirq to reenable
  * the tunnels asap. A corresponding pci quirk blocks the downstream bridges
  * resume_noirq until we are done.
  */
-static const struct dev_pm_ops nhi_pm_ops = {
+const struct dev_pm_ops nhi_pm_ops = {
 	.suspend_noirq = nhi_suspend_noirq,
 	.resume_noirq = nhi_resume_noirq,
 	.freeze_noirq = nhi_freeze_noirq,  /*
@@ -1480,198 +1242,3 @@ static const struct dev_pm_ops nhi_pm_ops = {
 	.runtime_suspend = nhi_runtime_suspend,
 	.runtime_resume = nhi_runtime_resume,
 };
-
-/*
- * During suspend the Thunderbolt controller is reset and all PCIe
- * tunnels are lost. The NHI driver will try to reestablish all tunnels
- * during resume. This adds device links between the tunneled PCIe
- * downstream ports and the NHI so that the device core will make sure
- * NHI is resumed first before the rest.
- */
-bool tb_apple_add_links(struct tb_nhi *nhi)
-{
-	struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
-	struct pci_dev *upstream, *pdev;
-	bool ret;
-
-	if (!x86_apple_machine)
-		return false;
-
-	switch (nhi_pdev->device) {
-	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
-	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
-	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
-		break;
-	default:
-		return false;
-	}
-
-	upstream = pci_upstream_bridge(nhi_pdev);
-	while (upstream) {
-		if (!pci_is_pcie(upstream))
-			return false;
-		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
-			break;
-		upstream = pci_upstream_bridge(upstream);
-	}
-
-	if (!upstream)
-		return false;
-
-	/*
-	 * For each hotplug downstream port, create add device link
-	 * back to NHI so that PCIe tunnels can be re-established after
-	 * sleep.
-	 */
-	ret = false;
-	for_each_pci_bridge(pdev, upstream->subordinate) {
-		const struct device_link *link;
-
-		if (!pci_is_pcie(pdev))
-			continue;
-		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
-		    !pdev->is_pciehp)
-			continue;
-
-		link = device_link_add(&pdev->dev, nhi->dev,
-				       DL_FLAG_AUTOREMOVE_SUPPLIER |
-				       DL_FLAG_PM_RUNTIME);
-		if (link) {
-			dev_dbg(nhi->dev, "created link from %s\n",
-				dev_name(&pdev->dev));
-			ret = true;
-		} else {
-			dev_warn(nhi->dev, "device link creation from %s failed\n",
-				 dev_name(&pdev->dev));
-		}
-	}
-
-	return ret;
-}
-
-static struct pci_device_id nhi_ids[] = {
-	/*
-	 * We have to specify class, the TB bridges use the same device and
-	 * vendor (sub)id on gen 1 and gen 2 controllers.
-	 */
-	{
-		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
-		.vendor = PCI_VENDOR_ID_INTEL,
-		.device = PCI_DEVICE_ID_INTEL_LIGHT_RIDGE,
-		.subvendor = 0x2222, .subdevice = 0x1111,
-	},
-	{
-		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
-		.vendor = PCI_VENDOR_ID_INTEL,
-		.device = PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C,
-		.subvendor = 0x2222, .subdevice = 0x1111,
-	},
-	{
-		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
-		.vendor = PCI_VENDOR_ID_INTEL,
-		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI,
-		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
-	},
-	{
-		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
-		.vendor = PCI_VENDOR_ID_INTEL,
-		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
-		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
-	},
-
-	/* Thunderbolt 3 */
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	/* Thunderbolt 4 */
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0),
-	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
-	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
-
-	/* Any USB4 compliant host */
-	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
-
-	{ 0,}
-};
-
-MODULE_DEVICE_TABLE(pci, nhi_ids);
-MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
-MODULE_LICENSE("GPL");
-
-static struct pci_driver nhi_driver = {
-	.name = "thunderbolt",
-	.id_table = nhi_ids,
-	.probe = nhi_probe,
-	.remove = nhi_remove,
-	.shutdown = nhi_remove,
-	.driver.pm = &nhi_pm_ops,
-};
-
-static int __init nhi_init(void)
-{
-	int ret;
-
-	ret = tb_domain_init();
-	if (ret)
-		return ret;
-	ret = pci_register_driver(&nhi_driver);
-	if (ret)
-		tb_domain_exit();
-	return ret;
-}
-
-static void __exit nhi_unload(void)
-{
-	pci_unregister_driver(&nhi_driver);
-	tb_domain_exit();
-}
-
-rootfs_initcall(nhi_init);
-module_exit(nhi_unload);
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index efcd119e26f8..0b172c724b42 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -30,6 +30,14 @@ enum nhi_mailbox_cmd {
 int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data);
 enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
 bool tb_apple_add_links(struct tb_nhi *nhi);
+void nhi_enable_int_throttling(struct tb_nhi *nhi);
+void nhi_disable_interrupts(struct tb_nhi *nhi);
+void nhi_interrupt_work(struct work_struct *work);
+irqreturn_t nhi_msi(int irq, void *data);
+irqreturn_t ring_msix(int irq, void *data);
+int nhi_probe(struct tb_nhi *nhi);
+void nhi_shutdown(struct tb_nhi *nhi);
+extern const struct dev_pm_ops nhi_pm_ops;
 
 /**
  * struct tb_nhi_ops - NHI specific optional operations
@@ -39,6 +47,12 @@ bool tb_apple_add_links(struct tb_nhi *nhi);
  * @runtime_suspend: NHI specific runtime_suspend hook
  * @runtime_resume: NHI specific runtime_resume hook
  * @shutdown: NHI specific shutdown
+ * @pre_nvm_auth: hook to run before TBT3 NVM authentication
+ * @post_nvm_auth: hook to run after TBT3 NVM authentication
+ * @request_ring_irq: NHI specific interrupt retrieval hook
+ * @release_ring_irq: NHI specific interrupt release hook
+ * @is_present: whether the device is currently present on the parent bus
+ * @init_interrupts: NHI specific interrupt initialization hook
  */
 struct tb_nhi_ops {
 	int (*init)(struct tb_nhi *nhi);
@@ -47,6 +61,12 @@ struct tb_nhi_ops {
 	int (*runtime_suspend)(struct tb_nhi *nhi);
 	int (*runtime_resume)(struct tb_nhi *nhi);
 	void (*shutdown)(struct tb_nhi *nhi);
+	void (*pre_nvm_auth)(struct tb_nhi *nhi);
+	void (*post_nvm_auth)(struct tb_nhi *nhi);
+	int (*request_ring_irq)(struct tb_ring *ring, bool no_suspend);
+	void (*release_ring_irq)(struct tb_ring *ring);
+	bool (*is_present)(struct tb_nhi *nhi);
+	int (*init_interrupts)(struct tb_nhi *nhi);
 };
 
 extern const struct tb_nhi_ops icl_nhi_ops;
@@ -101,4 +121,15 @@ extern const struct tb_nhi_ops icl_nhi_ops;
 
 #define PCI_CLASS_SERIAL_USB_USB4			0x0c0340
 
+/* Host interface quirks */
+#define QUIRK_AUTO_CLEAR_INT	BIT(0)
+#define QUIRK_E2E		BIT(1)
+
+/*
+ * Minimum number of vectors when we use MSI-X: two for the control
+ * channel Rx/Tx and the remaining four for cross-domain DMA paths.
+ */
+#define MSIX_MIN_VECS		6
+#define MSIX_MAX_VECS		16
+
 #endif
diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
index 8c50066f3411..530337a78322 100644
--- a/drivers/thunderbolt/nhi_ops.c
+++ b/drivers/thunderbolt/nhi_ops.c
@@ -11,6 +11,7 @@
 
 #include "nhi.h"
 #include "nhi_regs.h"
+#include "pci.h"
 #include "tb.h"
 
 /* Ice Lake specific NHI operations */
@@ -176,6 +177,8 @@ static int icl_nhi_resume(struct tb_nhi *nhi)
 
 static void icl_nhi_shutdown(struct tb_nhi *nhi)
 {
+	nhi_pci_shutdown(nhi);
+
 	icl_nhi_force_power(nhi, false);
 }
 
@@ -186,4 +189,10 @@ const struct tb_nhi_ops icl_nhi_ops = {
 	.runtime_suspend = icl_nhi_suspend,
 	.runtime_resume = icl_nhi_resume,
 	.shutdown = icl_nhi_shutdown,
+	.pre_nvm_auth = nhi_pci_start_dma_port,
+	.post_nvm_auth = nhi_pci_complete_dma_port,
+	.request_ring_irq = nhi_pci_ring_request_msix,
+	.release_ring_irq = nhi_pci_ring_release_msix,
+	.is_present = nhi_pci_is_present,
+	.init_interrupts = nhi_pci_init_msi,
 };
diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
new file mode 100644
index 000000000000..400ba88db034
--- /dev/null
+++ b/drivers/thunderbolt/pci.c
@@ -0,0 +1,507 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Thunderbolt driver - PCI NHI driver
+ *
+ * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
+ * Copyright (C) 2018, Intel Corporation
+ */
+
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/platform_data/x86/apple.h>
+#include <linux/property.h>
+#include <linux/string_helpers.h>
+
+#include "nhi.h"
+#include "nhi_regs.h"
+#include "pci.h"
+#include "tb.h"
+
+struct tb_nhi_pci {
+	struct tb_nhi nhi;
+	struct ida msix_ida;
+};
+
+static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
+{
+	return container_of(nhi, struct tb_nhi_pci, nhi);
+}
+
+static void nhi_pci_check_quirks(struct tb_nhi_pci *nhi_pci)
+{
+	struct tb_nhi *nhi = &nhi_pci->nhi;
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+		/*
+		 * Intel hardware supports auto clear of the interrupt
+		 * status register right after the interrupt has been
+		 * issued.
+		 */
+		nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
+
+		switch (pdev->device) {
+		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+			/*
+			 * Falcon Ridge controller needs the end-to-end
+			 * flow control workaround to avoid losing Rx
+			 * packets when RING_FLAG_E2E is set.
+			 */
+			nhi->quirks |= QUIRK_E2E;
+			break;
+		}
+	}
+}
+
+static int nhi_pci_check_iommu_pdev(struct pci_dev *pdev, void *data)
+{
+	if (!pdev->external_facing ||
+	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
+		return 0;
+	*(bool *)data = true;
+	return 1; /* Stop walking */
+}
+
+static void nhi_pci_check_iommu(struct tb_nhi_pci *nhi_pci)
+{
+	struct tb_nhi *nhi = &nhi_pci->nhi;
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	struct pci_bus *bus = pdev->bus;
+	bool port_ok = false;
+
+	/*
+	 * Ideally what we'd do here is grab every PCI device that
+	 * represents a tunnelling adapter for this NHI and check their
+	 * status directly, but unfortunately USB4 seems to make it
+	 * obnoxiously difficult to reliably make any correlation.
+	 *
+	 * So for now we'll have to bodge it... Hoping that the system
+	 * is at least sane enough that an adapter is in the same PCI
+	 * segment as its NHI, if we can find *something* on that segment
+	 * which meets the requirements for Kernel DMA Protection, we'll
+	 * take that to imply that firmware is aware and has (hopefully)
+	 * done the right thing in general. We need to know that the PCI
+	 * layer has seen the ExternalFacingPort property which will then
+	 * inform the IOMMU layer to enforce the complete "untrusted DMA"
+	 * flow, but also that the IOMMU driver itself can be trusted not
+	 * to have been subverted by a pre-boot DMA attack.
+	 */
+	while (bus->parent)
+		bus = bus->parent;
+
+	pci_walk_bus(bus, nhi_pci_check_iommu_pdev, &port_ok);
+
+	nhi->iommu_dma_protection = port_ok;
+	dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
+		str_enabled_disabled(port_ok));
+}
+
+int nhi_pci_init_msi(struct tb_nhi *nhi)
+{
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	struct device *dev = &pdev->dev;
+	int res, irq, nvec;
+
+	ida_init(&nhi_pci->msix_ida);
+
+	/*
+	 * The NHI has 16 MSI-X vectors or a single MSI. We first try to
+	 * get all MSI-X vectors and if we succeed, each ring will have
+	 * one MSI-X. If for some reason that does not work out, we
+	 * fall back to a single MSI.
+	 */
+	nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
+				     PCI_IRQ_MSIX);
+	if (nvec < 0) {
+		nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
+		if (nvec < 0)
+			return nvec;
+
+		INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
+
+		irq = pci_irq_vector(pdev, 0);
+		if (irq < 0)
+			return irq;
+
+		res = devm_request_irq(&pdev->dev, irq, nhi_msi,
+				       IRQF_NO_SUSPEND, "thunderbolt", nhi);
+		if (res)
+			return dev_err_probe(dev, res, "request_irq failed, aborting\n");
+	}
+
+	return 0;
+}
+
+static bool nhi_pci_imr_valid(struct pci_dev *pdev)
+{
+	u8 val;
+
+	if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val))
+		return !!val;
+
+	return true;
+}
+
+void nhi_pci_start_dma_port(struct tb_nhi *nhi)
+{
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	struct pci_dev *root_port;
+
+	/*
+	 * During host router NVM upgrade we should not allow root port to
+	 * go into D3cold because some root ports cannot trigger PME
+	 * themselves. To be on the safe side, keep the root port in D0 during
+	 * the whole upgrade process.
+	 */
+	root_port = pcie_find_root_port(pdev);
+	if (root_port)
+		pm_runtime_get_noresume(&root_port->dev);
+}
+
+void nhi_pci_complete_dma_port(struct tb_nhi *nhi)
+{
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	struct pci_dev *root_port;
+
+	root_port = pcie_find_root_port(pdev);
+	if (root_port)
+		pm_runtime_put(&root_port->dev);
+}
+
+int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend)
+{
+	struct tb_nhi *nhi = ring->nhi;
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+	unsigned long irqflags;
+	int ret;
+
+	if (!pdev->msix_enabled)
+		return 0;
+
+	ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	ring->vector = ret;
+
+	ret = pci_irq_vector(pdev, ring->vector);
+	if (ret < 0)
+		goto err_ida_remove;
+
+	ring->irq = ret;
+
+	irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
+	ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
+	if (ret)
+		goto err_ida_remove;
+
+	return 0;
+
+err_ida_remove:
+	ida_free(&nhi_pci->msix_ida, ring->vector);
+
+	return ret;
+}
+
+void nhi_pci_ring_release_msix(struct tb_ring *ring)
+{
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
+
+	if (ring->irq <= 0)
+		return;
+
+	free_irq(ring->irq, ring);
+	ida_free(&nhi_pci->msix_ida, ring->vector);
+	ring->vector = 0;
+	ring->irq = 0;
+}
+
+void nhi_pci_shutdown(struct tb_nhi *nhi)
+{
+	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+	struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+	/*
+	 * We have to release the irq before calling flush_work. Otherwise an
+	 * already executing IRQ handler could call schedule_work again.
+	 */
+	if (!pdev->msix_enabled) {
+		devm_free_irq(nhi->dev, pdev->irq, nhi);
+		flush_work(&nhi->interrupt_work);
+	}
+	ida_destroy(&nhi_pci->msix_ida);
+}
+
+bool nhi_pci_is_present(struct tb_nhi *nhi)
+{
+	return pci_device_is_present(to_pci_dev(nhi->dev));
+}
+
+static const struct tb_nhi_ops pci_nhi_default_ops = {
+	.pre_nvm_auth = nhi_pci_start_dma_port,
+	.post_nvm_auth = nhi_pci_complete_dma_port,
+	.request_ring_irq = nhi_pci_ring_request_msix,
+	.release_ring_irq = nhi_pci_ring_release_msix,
+	.shutdown = nhi_pci_shutdown,
+	.is_present = nhi_pci_is_present,
+	.init_interrupts = nhi_pci_init_msi,
+};
+
+static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct device *dev = &pdev->dev;
+	struct tb_nhi_pci *nhi_pci;
+	struct tb_nhi *nhi;
+	int res;
+
+	if (!nhi_pci_imr_valid(pdev))
+		return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n");
+
+	res = pcim_enable_device(pdev);
+	if (res)
+		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
+
+	nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
+	if (!nhi_pci)
+		return -ENOMEM;
+
+	nhi = &nhi_pci->nhi;
+	nhi->dev = dev;
+	nhi->ops = (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_default_ops;
+
+	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
+	res = PTR_ERR_OR_ZERO(nhi->iobase);
+	if (res)
+		return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
+
+	nhi_pci_check_quirks(nhi_pci);
+	nhi_pci_check_iommu(nhi_pci);
+
+	pci_set_master(pdev);
+
+	res = nhi_probe(&nhi_pci->nhi);
+	if (res)
+		return dev_err_probe(dev, res, "NHI common probe failed\n");
+
+	return 0;
+}
+
+static void nhi_pci_remove(struct pci_dev *pdev)
+{
+	struct tb *tb = pci_get_drvdata(pdev);
+	struct tb_nhi *nhi = tb->nhi;
+
+	pm_runtime_get_sync(&pdev->dev);
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	pm_runtime_forbid(&pdev->dev);
+
+	tb_domain_remove(tb);
+	nhi_shutdown(nhi);
+}
+
+/*
+ * During suspend the Thunderbolt controller is reset and all PCIe
+ * tunnels are lost. The NHI driver will try to reestablish all tunnels
+ * during resume. This adds device links between the tunneled PCIe
+ * downstream ports and the NHI so that the device core will make sure
+ * NHI is resumed first before the rest.
+ */
+bool tb_apple_add_links(struct tb_nhi *nhi)
+{
+	struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
+	struct pci_dev *upstream, *pdev;
+	bool ret;
+
+	if (!x86_apple_machine)
+		return false;
+
+	switch (nhi_pdev->device) {
+	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
+	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+		break;
+	default:
+		return false;
+	}
+
+	upstream = pci_upstream_bridge(nhi_pdev);
+	while (upstream) {
+		if (!pci_is_pcie(upstream))
+			return false;
+		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
+			break;
+		upstream = pci_upstream_bridge(upstream);
+	}
+
+	if (!upstream)
+		return false;
+
+	/*
+	 * For each hotplug downstream port, add a device link back to
+	 * the NHI so that PCIe tunnels can be re-established after
+	 * sleep.
+	 */
+	ret = false;
+	for_each_pci_bridge(pdev, upstream->subordinate) {
+		const struct device_link *link;
+
+		if (!pci_is_pcie(pdev))
+			continue;
+		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
+		    !pdev->is_pciehp)
+			continue;
+
+		link = device_link_add(&pdev->dev, nhi->dev,
+				       DL_FLAG_AUTOREMOVE_SUPPLIER |
+				       DL_FLAG_PM_RUNTIME);
+		if (link) {
+			dev_dbg(nhi->dev, "created link from %s\n",
+				dev_name(&pdev->dev));
+			ret = true;
+		} else {
+			dev_warn(nhi->dev, "device link creation from %s failed\n",
+				 dev_name(&pdev->dev));
+		}
+	}
+
+	return ret;
+}
+
+static struct pci_device_id nhi_ids[] = {
+	/*
+	 * We have to specify the class, as the TB bridges use the same
+	 * device and vendor (sub)ids on gen 1 and gen 2 controllers.
+	 */
+	{
+		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+		.vendor = PCI_VENDOR_ID_INTEL,
+		.device = PCI_DEVICE_ID_INTEL_LIGHT_RIDGE,
+		.subvendor = 0x2222, .subdevice = 0x1111,
+	},
+	{
+		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+		.vendor = PCI_VENDOR_ID_INTEL,
+		.device = PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C,
+		.subvendor = 0x2222, .subdevice = 0x1111,
+	},
+	{
+		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+		.vendor = PCI_VENDOR_ID_INTEL,
+		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI,
+		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
+	},
+	{
+		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+		.vendor = PCI_VENDOR_ID_INTEL,
+		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
+		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
+	},
+
+	/* Thunderbolt 3 */
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	/* Thunderbolt 4 */
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0),
+	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
+
+	/* Any USB4 compliant host */
+	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
+
+	{ 0,}
+};
+
+MODULE_DEVICE_TABLE(pci, nhi_ids);
+MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
+MODULE_LICENSE("GPL");
+
+static struct pci_driver nhi_driver = {
+	.name = "thunderbolt",
+	.id_table = nhi_ids,
+	.probe = nhi_pci_probe,
+	.remove = nhi_pci_remove,
+	.shutdown = nhi_pci_remove,
+	.driver.pm = &nhi_pm_ops,
+};
+
+static int __init nhi_init(void)
+{
+	int ret;
+
+	ret = tb_domain_init();
+	if (ret)
+		return ret;
+
+	ret = pci_register_driver(&nhi_driver);
+	if (ret)
+		tb_domain_exit();
+
+	return ret;
+}
+
+static void __exit nhi_unload(void)
+{
+	pci_unregister_driver(&nhi_driver);
+	tb_domain_exit();
+}
+
+rootfs_initcall(nhi_init);
+module_exit(nhi_unload);
diff --git a/drivers/thunderbolt/pci.h b/drivers/thunderbolt/pci.h
new file mode 100644
index 000000000000..8ce272a10661
--- /dev/null
+++ b/drivers/thunderbolt/pci.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#ifndef __TBT_PCI_H
+#define __TBT_PCI_H
+
+#include <linux/types.h>
+
+void nhi_pci_start_dma_port(struct tb_nhi *nhi);
+void nhi_pci_complete_dma_port(struct tb_nhi *nhi);
+int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend);
+void nhi_pci_ring_release_msix(struct tb_ring *ring);
+bool nhi_pci_is_present(struct tb_nhi *nhi);
+void nhi_pci_shutdown(struct tb_nhi *nhi);
+int nhi_pci_init_msi(struct tb_nhi *nhi);
+
+#endif
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 0680209e349c..9647650ee02d 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -209,32 +209,6 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
 	return -ETIMEDOUT;
 }
 
-static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
-{
-	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
-	struct pci_dev *root_port;
-
-	/*
-	 * During host router NVM upgrade we should not allow root port to
-	 * go into D3cold because some root ports cannot trigger PME
-	 * itself. To be on the safe side keep the root port in D0 during
-	 * the whole upgrade process.
-	 */
-	root_port = pcie_find_root_port(pdev);
-	if (root_port)
-		pm_runtime_get_noresume(&root_port->dev);
-}
-
-static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
-{
-	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
-	struct pci_dev *root_port;
-
-	root_port = pcie_find_root_port(pdev);
-	if (root_port)
-		pm_runtime_put(&root_port->dev);
-}
-
 static inline bool nvm_readable(struct tb_switch *sw)
 {
 	if (tb_switch_is_usb4(sw)) {
@@ -260,6 +234,7 @@ static inline bool nvm_upgradeable(struct tb_switch *sw)
 
 static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
 {
+	struct tb_nhi *nhi = sw->tb->nhi;
 	int ret;
 
 	if (tb_switch_is_usb4(sw)) {
@@ -276,7 +251,8 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
 
 	sw->nvm->authenticating = true;
 	if (!tb_route(sw)) {
-		nvm_authenticate_start_dma_port(sw);
+		if (nhi->ops && nhi->ops->pre_nvm_auth)
+			nhi->ops->pre_nvm_auth(nhi);
 		ret = nvm_authenticate_host_dma_port(sw);
 	} else {
 		ret = nvm_authenticate_device_dma_port(sw);
@@ -2745,6 +2721,7 @@ static int tb_switch_set_uuid(struct tb_switch *sw)
 
 static int tb_switch_add_dma_port(struct tb_switch *sw)
 {
+	struct tb_nhi *nhi = sw->tb->nhi;
 	u32 status;
 	int ret;
 
@@ -2804,8 +2781,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
 	 */
 	nvm_get_auth_status(sw, &status);
 	if (status) {
-		if (!tb_route(sw))
-			nvm_authenticate_complete_dma_port(sw);
+		if (!tb_route(sw)) {
+			if (nhi->ops && nhi->ops->post_nvm_auth)
+				nhi->ops->post_nvm_auth(nhi);
+		}
 		return 0;
 	}
 
@@ -2819,8 +2798,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
 		return ret;
 
 	/* Now we can allow root port to suspend again */
-	if (!tb_route(sw))
-		nvm_authenticate_complete_dma_port(sw);
+	if (!tb_route(sw)) {
+		if (nhi->ops && nhi->ops->post_nvm_auth)
+			nhi->ops->post_nvm_auth(nhi);
+	}
 
 	if (status) {
 		tb_sw_info(sw, "switch flash authentication failed\n");

-- 
2.54.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 3/4] thunderbolt: Require nhi->ops be valid
  2026-04-28 18:49 [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
@ 2026-04-28 18:49 ` Konrad Dybcio
  2026-04-28 18:49 ` [PATCH v2 4/4] thunderbolt: Add some more descriptive probe error messages Konrad Dybcio
  3 siblings, 0 replies; 13+ messages in thread
From: Konrad Dybcio @ 2026-04-28 18:49 UTC (permalink / raw)
  To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
  Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
	Konrad Dybcio

From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>

Because of how fundamental ops->init_interrupts() is, it no longer
makes sense to consider cases where nhi->ops is NULL.

Drop some boilerplate around it and add a single sanity-check in
nhi_probe() instead.
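The resulting contract can be sketched outside the kernel with mocked-up stand-ins (`tb_nhi` and `tb_nhi_ops` below are simplified illustrations, not the driver structures): ops and init_interrupts are validated once up front, while optional hooks keep their individual NULL checks.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures. */
struct tb_nhi;

struct tb_nhi_ops {
	int (*init_interrupts)(struct tb_nhi *nhi);	/* mandatory */
	int (*init)(struct tb_nhi *nhi);		/* optional */
};

struct tb_nhi {
	const struct tb_nhi_ops *ops;
};

/* Mirrors the single up-front sanity check: after this, the rest of
 * the driver may dereference nhi->ops without repeating NULL tests. */
static int nhi_probe_sketch(struct tb_nhi *nhi)
{
	if (!nhi->ops || !nhi->ops->init_interrupts)
		return -EINVAL;

	int res = nhi->ops->init_interrupts(nhi);
	if (res)
		return res;

	/* Optional hooks are still NULL-checked individually. */
	if (nhi->ops->init)
		return nhi->ops->init(nhi);

	return 0;
}

static int sketch_init_interrupts(struct tb_nhi *nhi)
{
	(void)nhi;
	return 0;
}
```

A provider that forgets to set ops (or omits init_interrupts) now fails probe deterministically with -EINVAL instead of silently skipping interrupt setup.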

Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
 drivers/thunderbolt/nhi.c    | 32 ++++++++++++++++++--------------
 drivers/thunderbolt/switch.c |  6 +++---
 2 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 3ceca434155d..917069726a9f 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -560,7 +560,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	if (!ring->descriptors)
 		goto err_free_ring;
 
-	if (nhi->ops && nhi->ops->request_ring_irq) {
+	if (nhi->ops->request_ring_irq) {
 		if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND))
 			goto err_free_descs;
 	}
@@ -571,7 +571,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
 	return ring;
 
 err_release_msix:
-	if (nhi->ops && nhi->ops->release_ring_irq)
+	if (nhi->ops->release_ring_irq)
 		nhi->ops->release_ring_irq(ring);
 err_free_descs:
 	dma_free_coherent(ring->nhi->dev,
@@ -780,7 +780,7 @@ void tb_ring_free(struct tb_ring *ring)
 	}
 	spin_unlock_irq(&ring->nhi->lock);
 
-	if (nhi->ops && nhi->ops->release_ring_irq)
+	if (nhi->ops->release_ring_irq)
 		nhi->ops->release_ring_irq(ring);
 
 	dma_free_coherent(ring->nhi->dev,
@@ -931,7 +931,7 @@ static int __nhi_suspend_noirq(struct device *dev, bool wakeup)
 	if (ret)
 		return ret;
 
-	if (nhi->ops && nhi->ops->suspend_noirq) {
+	if (nhi->ops->suspend_noirq) {
 		ret = nhi->ops->suspend_noirq(tb->nhi, wakeup);
 		if (ret)
 			return ret;
@@ -1011,7 +1011,7 @@ static int nhi_resume_noirq(struct device *dev)
 	if ((nhi->ops->is_present && !nhi->ops->is_present(nhi))) {
 		nhi->going_away = true;
 	} else {
-		if (nhi->ops && nhi->ops->resume_noirq) {
+		if (nhi->ops->resume_noirq) {
 			ret = nhi->ops->resume_noirq(nhi);
 			if (ret)
 				return ret;
@@ -1054,7 +1054,7 @@ static int nhi_runtime_suspend(struct device *dev)
 	if (ret)
 		return ret;
 
-	if (nhi->ops && nhi->ops->runtime_suspend) {
+	if (nhi->ops->runtime_suspend) {
 		ret = nhi->ops->runtime_suspend(tb->nhi);
 		if (ret)
 			return ret;
@@ -1068,7 +1068,7 @@ static int nhi_runtime_resume(struct device *dev)
 	struct tb_nhi *nhi = tb->nhi;
 	int ret;
 
-	if (nhi->ops && nhi->ops->runtime_resume) {
+	if (nhi->ops->runtime_resume) {
 		ret = nhi->ops->runtime_resume(nhi);
 		if (ret)
 			return ret;
@@ -1094,7 +1094,7 @@ void nhi_shutdown(struct tb_nhi *nhi)
 	}
 	nhi_disable_interrupts(nhi);
 
-	if (nhi->ops && nhi->ops->shutdown)
+	if (nhi->ops->shutdown)
 		nhi->ops->shutdown(nhi);
 }
 
@@ -1158,6 +1158,12 @@ int nhi_probe(struct tb_nhi *nhi)
 	struct tb *tb;
 	int res;
 
+	if (!nhi->ops)
+		return dev_err_probe(dev, -EINVAL, "NHI ops not set\n");
+
+	if (!nhi->ops->init_interrupts)
+		return dev_err_probe(dev, -EINVAL, "missing required NHI ops\n");
+
 	nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
 	dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
 
@@ -1174,11 +1180,9 @@ int nhi_probe(struct tb_nhi *nhi)
 	nhi_disable_interrupts(nhi);
 	nhi_enable_int_throttling(nhi);
 
-	if (nhi->ops && nhi->ops->init_interrupts) {
-		res = nhi->ops->init_interrupts(nhi);
-		if (res)
-			return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
-	}
+	res = nhi->ops->init_interrupts(nhi);
+	if (res)
+		return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
 
 	spin_lock_init(&nhi->lock);
 
@@ -1186,7 +1190,7 @@ int nhi_probe(struct tb_nhi *nhi)
 	if (res)
 		return dev_err_probe(dev, res, "failed to set DMA mask\n");
 
-	if (nhi->ops && nhi->ops->init) {
+	if (nhi->ops->init) {
 		res = nhi->ops->init(nhi);
 		if (res)
 			return res;
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 9647650ee02d..5f09460af1b8 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -251,7 +251,7 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
 
 	sw->nvm->authenticating = true;
 	if (!tb_route(sw)) {
-		if (nhi->ops && nhi->ops->pre_nvm_auth)
+		if (nhi->ops->pre_nvm_auth)
 			nhi->ops->pre_nvm_auth(nhi);
 		ret = nvm_authenticate_host_dma_port(sw);
 	} else {
@@ -2782,7 +2782,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
 	nvm_get_auth_status(sw, &status);
 	if (status) {
 		if (!tb_route(sw)) {
-			if (nhi->ops && nhi->ops->post_nvm_auth)
+			if (nhi->ops->post_nvm_auth)
 				nhi->ops->post_nvm_auth(nhi);
 		}
 		return 0;
@@ -2799,7 +2799,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
 
 	/* Now we can allow root port to suspend again */
 	if (!tb_route(sw)) {
-		if (nhi->ops && nhi->ops->post_nvm_auth)
+		if (nhi->ops->post_nvm_auth)
 			nhi->ops->post_nvm_auth(nhi);
 	}
 

-- 
2.54.0



* [PATCH v2 4/4] thunderbolt: Add some more descriptive probe error messages
  2026-04-28 18:49 [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
                   ` (2 preceding siblings ...)
  2026-04-28 18:49 ` [PATCH v2 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
@ 2026-04-28 18:49 ` Konrad Dybcio
  3 siblings, 0 replies; 13+ messages in thread
From: Konrad Dybcio @ 2026-04-28 18:49 UTC (permalink / raw)
  To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
  Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
	Konrad Dybcio

From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>

Currently there are a lot of silent error-return paths in various places
where nhi_probe() can fail. Sprinkle some prints to make it clearer
where the problem is.
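For reference, dev_err_probe() both logs the message and returns the error code unchanged, staying quiet for -EPROBE_DEFER (where it records the deferral reason instead). A simplified standalone model of that behaviour, with the kernel logger replaced by fprintf and the deferral bookkeeping omitted:

```c
#include <assert.h>
#include <stdio.h>

#define EPROBE_DEFER 517	/* Linux's driver-defer errno value */

/* Simplified model: log unless the driver is merely deferring, then
 * hand the error straight back, so a silent "return res;" becomes
 *	return dev_err_probe(dev, res, "failed to add domain\n");
 * without changing the function's control flow or return value. */
static int dev_err_probe_sketch(const char *dev, int err, const char *msg)
{
	if (err != -EPROBE_DEFER)
		fprintf(stderr, "%s: error %d: %s", dev, err, msg);
	return err;
}
```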

Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
 drivers/thunderbolt/nhi.c | 4 ++--
 drivers/thunderbolt/tb.c  | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 917069726a9f..28376a6f914b 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -1193,7 +1193,7 @@ int nhi_probe(struct tb_nhi *nhi)
 	if (nhi->ops->init) {
 		res = nhi->ops->init(nhi);
 		if (res)
-			return res;
+			return dev_err_probe(dev, res, "NHI specific init failed\n");
 	}
 
 	tb = nhi_select_cm(nhi);
@@ -1211,7 +1211,7 @@ int nhi_probe(struct tb_nhi *nhi)
 		 */
 		tb_domain_put(tb);
 		nhi_shutdown(nhi);
-		return res;
+		return dev_err_probe(dev, res, "failed to add domain\n");
 	}
 	dev_set_drvdata(dev, tb);
 
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index 0126e38d9396..b8b4b925fe8c 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -2990,7 +2990,8 @@ static int tb_start(struct tb *tb, bool reset)
 
 	tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
 	if (IS_ERR(tb->root_switch))
-		return PTR_ERR(tb->root_switch);
+		return dev_err_probe(tb->nhi->dev, PTR_ERR(tb->root_switch),
+				     "failed to allocate host router\n");
 
 	/*
 	 * ICM firmware upgrade needs running firmware and in native
@@ -3007,14 +3008,14 @@ static int tb_start(struct tb *tb, bool reset)
 	ret = tb_switch_configure(tb->root_switch);
 	if (ret) {
 		tb_switch_put(tb->root_switch);
-		return ret;
+		return dev_err_probe(tb->nhi->dev, ret, "failed to configure host router\n");
 	}
 
 	/* Announce the switch to the world */
 	ret = tb_switch_add(tb->root_switch);
 	if (ret) {
 		tb_switch_put(tb->root_switch);
-		return ret;
+		return dev_err_probe(tb->nhi->dev, ret, "failed to add host router\n");
 	}
 
 	/*

-- 
2.54.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi
  2026-04-28 18:49 ` [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
@ 2026-05-04  6:40   ` Mika Westerberg
  0 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2026-05-04  6:40 UTC (permalink / raw)
  To: Konrad Dybcio
  Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
	linux-usb, usb4-upstream, Raghavendra Thoorpu, Konrad Dybcio

Hi,

On Tue, Apr 28, 2026 at 08:49:44PM +0200, Konrad Dybcio wrote:
> From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
> 
> Not all USB4/TB implementations are based on a PCIe-attached
> controller. In order to make way for these, start by moving the
> pci_device reference out of the main tb_nhi structure.
> 
> Encapsulate the existing struct in a new tb_nhi_pci, which will also
> house all properties that relate to the parent bus. Similarly, any
> other type of controller will be expected to contain tb_nhi as a
> member.
> 
> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
> ---
>  drivers/thunderbolt/acpi.c      |  14 +--
>  drivers/thunderbolt/ctl.c       |  14 +--
>  drivers/thunderbolt/domain.c    |   2 +-
>  drivers/thunderbolt/eeprom.c    |   2 +-
>  drivers/thunderbolt/icm.c       |  24 ++---
>  drivers/thunderbolt/nhi.c       | 212 ++++++++++++++++++++++++++++------------
>  drivers/thunderbolt/nhi.h       |   1 +
>  drivers/thunderbolt/nhi_ops.c   |  26 ++---
>  drivers/thunderbolt/switch.c    |   6 +-
>  drivers/thunderbolt/tb.c        |  69 -------------
>  drivers/thunderbolt/tb.h        |  10 +-
>  drivers/thunderbolt/usb4_port.c |   2 +-
>  include/linux/thunderbolt.h     |   5 +-
>  13 files changed, 209 insertions(+), 178 deletions(-)
> 
> diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
> index 45d1415871b4..53546bc477a5 100644
> --- a/drivers/thunderbolt/acpi.c
> +++ b/drivers/thunderbolt/acpi.c
> @@ -28,7 +28,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
>  		return AE_OK;
>  
>  	/* It needs to reference this NHI */
> -	if (dev_fwnode(&nhi->pdev->dev) != fwnode)
> +	if (dev_fwnode(nhi->dev) != fwnode)
>  		goto out_put;
>  
>  	/*
> @@ -57,16 +57,16 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
>  		 */
>  		pm_runtime_get_sync(&pdev->dev);
>  
> -		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
> +		link = device_link_add(&pdev->dev, nhi->dev,
>  				       DL_FLAG_AUTOREMOVE_SUPPLIER |
>  				       DL_FLAG_RPM_ACTIVE |
>  				       DL_FLAG_PM_RUNTIME);
>  		if (link) {
> -			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
> +			dev_dbg(nhi->dev, "created link from %s\n",
>  				dev_name(&pdev->dev));
>  			*(bool *)ret = true;
>  		} else {
> -			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
> +			dev_warn(nhi->dev, "device link creation from %s failed\n",
>  				 dev_name(&pdev->dev));
>  		}
>  
> @@ -93,7 +93,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
>  	acpi_status status;
>  	bool ret = false;
>  
> -	if (!has_acpi_companion(&nhi->pdev->dev))
> +	if (!has_acpi_companion(nhi->dev))
>  		return false;
>  
>  	/*
> @@ -103,7 +103,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
>  	status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 32,
>  				     tb_acpi_add_link, NULL, nhi, (void **)&ret);
>  	if (ACPI_FAILURE(status)) {
> -		dev_warn(&nhi->pdev->dev, "failed to enumerate tunneled ports\n");
> +		dev_warn(nhi->dev, "failed to enumerate tunneled ports\n");
>  		return false;
>  	}
>  
> @@ -305,7 +305,7 @@ static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
>  		struct tb_nhi *nhi = sw->tb->nhi;
>  		struct acpi_device *parent_adev;
>  
> -		parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
> +		parent_adev = ACPI_COMPANION(nhi->dev);
>  		if (parent_adev)
>  			adev = acpi_find_child_device(parent_adev, 0, false);
>  	}
> diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
> index b2fd60fc7bcc..a22c8c2a301d 100644
> --- a/drivers/thunderbolt/ctl.c
> +++ b/drivers/thunderbolt/ctl.c
> @@ -56,22 +56,22 @@ struct tb_ctl {
>  
>  
>  #define tb_ctl_WARN(ctl, format, arg...) \
> -	dev_WARN(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_WARN((ctl)->nhi->dev, format, ## arg)
>  
>  #define tb_ctl_err(ctl, format, arg...) \
> -	dev_err(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_err((ctl)->nhi->dev, format, ## arg)
>  
>  #define tb_ctl_warn(ctl, format, arg...) \
> -	dev_warn(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_warn((ctl)->nhi->dev, format, ## arg)
>  
>  #define tb_ctl_info(ctl, format, arg...) \
> -	dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_info((ctl)->nhi->dev, format, ## arg)
>  
>  #define tb_ctl_dbg(ctl, format, arg...) \
> -	dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_dbg((ctl)->nhi->dev, format, ## arg)
>  
>  #define tb_ctl_dbg_once(ctl, format, arg...) \
> -	dev_dbg_once(&(ctl)->nhi->pdev->dev, format, ## arg)
> +	dev_dbg_once((ctl)->nhi->dev, format, ## arg)
>  
>  static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue);
>  /* Serializes access to request kref_get/put */
> @@ -666,7 +666,7 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int index, int timeout_msec,
>  
>  	mutex_init(&ctl->request_queue_lock);
>  	INIT_LIST_HEAD(&ctl->request_queue);
> -	ctl->frame_pool = dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev,
> +	ctl->frame_pool = dma_pool_create("thunderbolt_ctl", nhi->dev,
>  					 TB_FRAME_SIZE, 4, 0);
>  	if (!ctl->frame_pool)
>  		goto err;
> diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
> index 317780b99992..8e332a9ad625 100644
> --- a/drivers/thunderbolt/domain.c
> +++ b/drivers/thunderbolt/domain.c
> @@ -402,7 +402,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize
>  	if (!tb->ctl)
>  		goto err_destroy_wq;
>  
> -	tb->dev.parent = &nhi->pdev->dev;
> +	tb->dev.parent = nhi->dev;
>  	tb->dev.bus = &tb_bus_type;
>  	tb->dev.type = &tb_domain_type;
>  	tb->dev.groups = domain_attr_groups;
> diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
> index 5477b9437048..5681c17f82ec 100644
> --- a/drivers/thunderbolt/eeprom.c
> +++ b/drivers/thunderbolt/eeprom.c
> @@ -465,7 +465,7 @@ static void tb_switch_drom_free(struct tb_switch *sw)
>   */
>  static int tb_drom_copy_efi(struct tb_switch *sw, u16 *size)
>  {
> -	struct device *dev = &sw->tb->nhi->pdev->dev;
> +	struct device *dev = sw->tb->nhi->dev;
>  	int len, res;
>  
>  	len = device_property_count_u8(dev, "ThunderboltDROM");
> diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
> index 9d95bf3ab44c..97c33752a075 100644
> --- a/drivers/thunderbolt/icm.c
> +++ b/drivers/thunderbolt/icm.c
> @@ -1455,6 +1455,7 @@ static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
>  
>  static bool icm_ar_is_supported(struct tb *tb)
>  {
> +	struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
>  	struct pci_dev *upstream_port;
>  	struct icm *icm = tb_priv(tb);
>  
> @@ -1472,7 +1473,7 @@ static bool icm_ar_is_supported(struct tb *tb)
>  	 * Find the upstream PCIe port in case we need to do reset
>  	 * through its vendor specific registers.
>  	 */
> -	upstream_port = get_upstream_port(tb->nhi->pdev);
> +	upstream_port = get_upstream_port(pdev);
>  	if (upstream_port) {
>  		int cap;
>  
> @@ -1508,7 +1509,7 @@ static int icm_ar_get_mode(struct tb *tb)
>  	} while (--retries);
>  
>  	if (!retries) {
> -		dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n");
> +		dev_err(nhi->dev, "ICM firmware not authenticated\n");
>  		return -ENODEV;
>  	}
>  
> @@ -1674,11 +1675,11 @@ icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
>  
>  static void icm_icl_set_uuid(struct tb *tb)
>  {
> -	struct tb_nhi *nhi = tb->nhi;
> +	struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
>  	u32 uuid[4];
>  
> -	pci_read_config_dword(nhi->pdev, VS_CAP_10, &uuid[0]);
> -	pci_read_config_dword(nhi->pdev, VS_CAP_11, &uuid[1]);
> +	pci_read_config_dword(pdev, VS_CAP_10, &uuid[0]);
> +	pci_read_config_dword(pdev, VS_CAP_11, &uuid[1]);
>  	uuid[2] = 0xffffffff;
>  	uuid[3] = 0xffffffff;
>  
> @@ -1853,7 +1854,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
>  	if (icm_firmware_running(nhi))
>  		return 0;
>  
> -	dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n");
> +	dev_dbg(nhi->dev, "starting ICM firmware\n");
>  
>  	ret = icm_firmware_reset(tb, nhi);
>  	if (ret)
> @@ -1948,7 +1949,7 @@ static int icm_firmware_init(struct tb *tb)
>  
>  	ret = icm_firmware_start(tb, nhi);
>  	if (ret) {
> -		dev_err(&nhi->pdev->dev, "could not start ICM firmware\n");
> +		dev_err(nhi->dev, "could not start ICM firmware\n");
>  		return ret;
>  	}
>  
> @@ -1980,10 +1981,10 @@ static int icm_firmware_init(struct tb *tb)
>  	 */
>  	ret = icm_reset_phy_port(tb, 0);
>  	if (ret)
> -		dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n");
> +		dev_warn(nhi->dev, "failed to reset links on port0\n");
>  	ret = icm_reset_phy_port(tb, 1);
>  	if (ret)
> -		dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n");
> +		dev_warn(nhi->dev, "failed to reset links on port1\n");
>  
>  	return 0;
>  }
> @@ -2462,6 +2463,7 @@ static const struct tb_cm_ops icm_icl_ops = {
>  
>  struct tb *icm_probe(struct tb_nhi *nhi)
>  {
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	struct icm *icm;
>  	struct tb *tb;
>  
> @@ -2473,7 +2475,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
>  	INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work);
>  	mutex_init(&icm->request_lock);
>  
> -	switch (nhi->pdev->device) {
> +	switch (pdev->device) {
>  	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
>  	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
>  		icm->can_upgrade_nvm = true;
> @@ -2579,7 +2581,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
>  	}
>  
>  	if (!icm->is_supported || !icm->is_supported(tb)) {
> -		dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n");
> +		dev_dbg(nhi->dev, "ICM not supported on this controller\n");
>  		tb_domain_put(tb);
>  		return NULL;
>  	}
> diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
> index 2bb2e79ca3cb..2d01e698dd65 100644
> --- a/drivers/thunderbolt/nhi.c
> +++ b/drivers/thunderbolt/nhi.c
> @@ -2,7 +2,7 @@
>  /*
>   * Thunderbolt driver - NHI driver
>   *
> - * The NHI (native host interface) is the pci device that allows us to send and
> + * The NHI (native host interface) is the device that allows us to send and
>   * receive frames from the thunderbolt bus.
>   *
>   * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
> @@ -12,12 +12,12 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/slab.h>
>  #include <linux/errno.h>
> -#include <linux/pci.h>
>  #include <linux/dma-mapping.h>
>  #include <linux/interrupt.h>
>  #include <linux/iommu.h>
>  #include <linux/module.h>
>  #include <linux/delay.h>
> +#include <linux/platform_data/x86/apple.h>
>  #include <linux/property.h>
>  #include <linux/string_choices.h>
>  #include <linux/string_helpers.h>
> @@ -51,6 +51,16 @@ static bool host_reset = true;
>  module_param(host_reset, bool, 0444);
>  MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)");
>  

Let's add kernel-doc here.

> +struct tb_nhi_pci {
> +	struct tb_nhi nhi;
> +	struct ida msix_ida;
> +};
> +
> +static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
> +{
> +	return container_of(nhi, struct tb_nhi_pci, nhi);
> +}
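For readers less familiar with the embedding pattern above: the PCI-specific wrapper contains the generic tb_nhi as its first member, and nhi_to_pci() recovers the wrapper via container_of() with no back-pointer field. A minimal userspace rendition, with simplified stand-in structs rather than the real kernel types:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace rendition of the kernel's container_of(): recover the
 * enclosing structure from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct tb_nhi { int hop_count; };

/* The bus-specific wrapper embeds the generic NHI, so the
 * conversion is pure pointer arithmetic. */
struct tb_nhi_pci {
	struct tb_nhi nhi;
	int msix_ida; /* placeholder for the real struct ida */
};

static struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
{
	return container_of(nhi, struct tb_nhi_pci, nhi);
}
```

This is why a non-PCIe controller can simply embed tb_nhi in its own state struct and define an analogous accessor.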
> +
>  static int ring_interrupt_index(const struct tb_ring *ring)
>  {
>  	int bit = ring->hop;
> @@ -139,12 +149,12 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
>  	else
>  		new = old & ~mask;
>  
> -	dev_dbg(&ring->nhi->pdev->dev,
> +	dev_dbg(ring->nhi->dev,
>  		"%s interrupt at register %#x bit %d (%#x -> %#x)\n",
>  		active ? "enabling" : "disabling", reg, interrupt_bit, old, new);
>  
>  	if (new == old)
> -		dev_WARN(&ring->nhi->pdev->dev,
> +		dev_WARN(ring->nhi->dev,
>  					 "interrupt for %s %d is already %s\n",

You can combine these two lines.

>  					 RING_TYPE(ring), ring->hop,
>  					 str_enabled_disabled(active));
> @@ -462,19 +472,21 @@ static irqreturn_t ring_msix(int irq, void *data)
>  static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
>  {
>  	struct tb_nhi *nhi = ring->nhi;
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	unsigned long irqflags;
>  	int ret;
>  
> -	if (!nhi->pdev->msix_enabled)
> +	if (!pdev->msix_enabled)
>  		return 0;
>  
> -	ret = ida_alloc_max(&nhi->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
> +	ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
>  	if (ret < 0)
>  		return ret;
>  
>  	ring->vector = ret;
>  
> -	ret = pci_irq_vector(ring->nhi->pdev, ring->vector);
> +	ret = pci_irq_vector(pdev, ring->vector);
>  	if (ret < 0)
>  		goto err_ida_remove;
>  
> @@ -488,18 +500,20 @@ static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
>  	return 0;
>  
>  err_ida_remove:
> -	ida_free(&nhi->msix_ida, ring->vector);
> +	ida_free(&nhi_pci->msix_ida, ring->vector);
>  
>  	return ret;
>  }
>  
>  static void ring_release_msix(struct tb_ring *ring)
>  {
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
> +
>  	if (ring->irq <= 0)
>  		return;
>  
>  	free_irq(ring->irq, ring);
> -	ida_free(&ring->nhi->msix_ida, ring->vector);
> +	ida_free(&nhi_pci->msix_ida, ring->vector);
>  	ring->vector = 0;
>  	ring->irq = 0;
>  }
> @@ -512,7 +526,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
>  	if (nhi->quirks & QUIRK_E2E) {
>  		start_hop = RING_FIRST_USABLE_HOPID + 1;
>  		if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
> -			dev_dbg(&nhi->pdev->dev, "quirking E2E TX HopID %u -> %u\n",
> +			dev_dbg(nhi->dev, "quirking E2E TX HopID %u -> %u\n",
>  				ring->e2e_tx_hop, RING_E2E_RESERVED_HOPID);
>  			ring->e2e_tx_hop = RING_E2E_RESERVED_HOPID;
>  		}
> @@ -543,23 +557,23 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
>  	}
>  
>  	if (ring->hop > 0 && ring->hop < start_hop) {
> -		dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
> +		dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
>  		ret = -EINVAL;
>  		goto err_unlock;
>  	}
>  	if (ring->hop < 0 || ring->hop >= nhi->hop_count) {
> -		dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
> +		dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
>  		ret = -EINVAL;
>  		goto err_unlock;
>  	}
>  	if (ring->is_tx && nhi->tx_rings[ring->hop]) {
> -		dev_warn(&nhi->pdev->dev, "TX hop %d already allocated\n",
> +		dev_warn(nhi->dev, "TX hop %d already allocated\n",
>  			 ring->hop);
>  		ret = -EBUSY;
>  		goto err_unlock;
>  	}
>  	if (!ring->is_tx && nhi->rx_rings[ring->hop]) {
> -		dev_warn(&nhi->pdev->dev, "RX hop %d already allocated\n",
> +		dev_warn(nhi->dev, "RX hop %d already allocated\n",
>  			 ring->hop);
>  		ret = -EBUSY;
>  		goto err_unlock;
> @@ -584,7 +598,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
>  {
>  	struct tb_ring *ring = NULL;
>  
> -	dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
> +	dev_dbg(nhi->dev, "allocating %s ring %d of size %d\n",
>  		transmit ? "TX" : "RX", hop, size);
>  
>  	ring = kzalloc_obj(*ring);
> @@ -610,7 +624,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
>  	ring->start_poll = start_poll;
>  	ring->poll_data = poll_data;
>  
> -	ring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,
> +	ring->descriptors = dma_alloc_coherent(ring->nhi->dev,
>  			size * sizeof(*ring->descriptors),
>  			&ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
>  	if (!ring->descriptors)
> @@ -627,7 +641,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
>  err_release_msix:
>  	ring_release_msix(ring);
>  err_free_descs:
> -	dma_free_coherent(&ring->nhi->pdev->dev,
> +	dma_free_coherent(ring->nhi->dev,
>  			  ring->size * sizeof(*ring->descriptors),
>  			  ring->descriptors, ring->descriptors_dma);
>  err_free_ring:
> @@ -694,10 +708,10 @@ void tb_ring_start(struct tb_ring *ring)
>  	if (ring->nhi->going_away)
>  		goto err;
>  	if (ring->running) {
> -		dev_WARN(&ring->nhi->pdev->dev, "ring already started\n");
> +		dev_WARN(ring->nhi->dev, "ring already started\n");
>  		goto err;
>  	}
> -	dev_dbg(&ring->nhi->pdev->dev, "starting %s %d\n",
> +	dev_dbg(ring->nhi->dev, "starting %s %d\n",
>  		RING_TYPE(ring), ring->hop);
>  
>  	if (ring->flags & RING_FLAG_FRAME) {
> @@ -734,11 +748,11 @@ void tb_ring_start(struct tb_ring *ring)
>  			hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
>  			flags |= hop;
>  
> -			dev_dbg(&ring->nhi->pdev->dev,
> +			dev_dbg(ring->nhi->dev,
>  				"enabling E2E for %s %d with TX HopID %d\n",
>  				RING_TYPE(ring), ring->hop, ring->e2e_tx_hop);
>  		} else {
> -			dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n",
> +			dev_dbg(ring->nhi->dev, "enabling E2E for %s %d\n",
>  				RING_TYPE(ring), ring->hop);
>  		}
>  
> @@ -772,12 +786,12 @@ void tb_ring_stop(struct tb_ring *ring)
>  {
>  	spin_lock_irq(&ring->nhi->lock);
>  	spin_lock(&ring->lock);
> -	dev_dbg(&ring->nhi->pdev->dev, "stopping %s %d\n",
> +	dev_dbg(ring->nhi->dev, "stopping %s %d\n",
>  		RING_TYPE(ring), ring->hop);
>  	if (ring->nhi->going_away)
>  		goto err;
>  	if (!ring->running) {
> -		dev_WARN(&ring->nhi->pdev->dev, "%s %d already stopped\n",
> +		dev_WARN(ring->nhi->dev, "%s %d already stopped\n",
>  			 RING_TYPE(ring), ring->hop);
>  		goto err;
>  	}
> @@ -826,14 +840,14 @@ void tb_ring_free(struct tb_ring *ring)
>  		ring->nhi->rx_rings[ring->hop] = NULL;
>  
>  	if (ring->running) {
> -		dev_WARN(&ring->nhi->pdev->dev, "%s %d still running\n",
> +		dev_WARN(ring->nhi->dev, "%s %d still running\n",
>  			 RING_TYPE(ring), ring->hop);
>  	}
>  	spin_unlock_irq(&ring->nhi->lock);
>  
>  	ring_release_msix(ring);
>  
> -	dma_free_coherent(&ring->nhi->pdev->dev,
> +	dma_free_coherent(ring->nhi->dev,
>  			  ring->size * sizeof(*ring->descriptors),
>  			  ring->descriptors, ring->descriptors_dma);
>  
> @@ -841,7 +855,7 @@ void tb_ring_free(struct tb_ring *ring)
>  	ring->descriptors_dma = 0;
>  
>  
> -	dev_dbg(&ring->nhi->pdev->dev, "freeing %s %d\n", RING_TYPE(ring),
> +	dev_dbg(ring->nhi->dev, "freeing %s %d\n", RING_TYPE(ring),
>  		ring->hop);
>  
>  	/*
> @@ -940,7 +954,7 @@ static void nhi_interrupt_work(struct work_struct *work)
>  		if ((value & (1 << (bit % 32))) == 0)
>  			continue;
>  		if (type == 2) {
> -			dev_warn(&nhi->pdev->dev,
> +			dev_warn(nhi->dev,
>  				 "RX overflow for ring %d\n",
>  				 hop);

I think here too.

>  			continue;
> @@ -950,7 +964,7 @@ static void nhi_interrupt_work(struct work_struct *work)
>  		else
>  			ring = nhi->rx_rings[hop];
>  		if (ring == NULL) {
> -			dev_warn(&nhi->pdev->dev,
> +			dev_warn(nhi->dev,
>  				 "got interrupt for inactive %s ring %d\n",
>  				 type ? "RX" : "TX",
>  				 hop);
> @@ -1139,16 +1153,18 @@ static int nhi_runtime_resume(struct device *dev)
>  
>  static void nhi_shutdown(struct tb_nhi *nhi)
>  {
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	int i;
>  
> -	dev_dbg(&nhi->pdev->dev, "shutdown\n");
> +	dev_dbg(nhi->dev, "shutdown\n");
>  
>  	for (i = 0; i < nhi->hop_count; i++) {
>  		if (nhi->tx_rings[i])
> -			dev_WARN(&nhi->pdev->dev,
> +			dev_WARN(nhi->dev,
>  				 "TX ring %d is still active\n", i);
>  		if (nhi->rx_rings[i])
> -			dev_WARN(&nhi->pdev->dev,
> +			dev_WARN(nhi->dev,
>  				 "RX ring %d is still active\n", i);
>  	}
>  	nhi_disable_interrupts(nhi);
> @@ -1156,19 +1172,22 @@ static void nhi_shutdown(struct tb_nhi *nhi)
>  	 * We have to release the irq before calling flush_work. Otherwise an
>  	 * already executing IRQ handler could call schedule_work again.
>  	 */
> -	if (!nhi->pdev->msix_enabled) {
> -		devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
> +	if (!pdev->msix_enabled) {
> +		devm_free_irq(nhi->dev, pdev->irq, nhi);
>  		flush_work(&nhi->interrupt_work);
>  	}
> -	ida_destroy(&nhi->msix_ida);
> +	ida_destroy(&nhi_pci->msix_ida);
>  
>  	if (nhi->ops && nhi->ops->shutdown)
>  		nhi->ops->shutdown(nhi);
>  }
>  
> -static void nhi_check_quirks(struct tb_nhi *nhi)
> +static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci)
>  {
> -	if (nhi->pdev->vendor == PCI_VENDOR_ID_INTEL) {
> +	struct tb_nhi *nhi = &nhi_pci->nhi;
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +
> +	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
>  		/*
>  		 * Intel hardware supports auto clear of the interrupt
>  		 * status register right after interrupt is being
> @@ -1176,7 +1195,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
>  		 */
>  		nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
>  
> -		switch (nhi->pdev->device) {
> +		switch (pdev->device) {
>  		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
>  		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
>  			/*
> @@ -1190,7 +1209,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
>  	}
>  }
>  
> -static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
> +static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data)
>  {
>  	if (!pdev->external_facing ||
>  	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
> @@ -1199,9 +1218,11 @@ static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
>  	return 1; /* Stop walking */
>  }
>  
> -static void nhi_check_iommu(struct tb_nhi *nhi)
> +static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci)
>  {
> -	struct pci_bus *bus = nhi->pdev->bus;
> +	struct tb_nhi *nhi = &nhi_pci->nhi;
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	struct pci_bus *bus = pdev->bus;
>  	bool port_ok = false;
>  
>  	/*
> @@ -1224,10 +1245,10 @@ static void nhi_check_iommu(struct tb_nhi *nhi)
>  	while (bus->parent)
>  		bus = bus->parent;
>  
> -	pci_walk_bus(bus, nhi_check_iommu_pdev, &port_ok);
> +	pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok);
>  
>  	nhi->iommu_dma_protection = port_ok;
> -	dev_dbg(&nhi->pdev->dev, "IOMMU DMA protection is %s\n",
> +	dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
>  		str_enabled_disabled(port_ok));
>  }
>  
> @@ -1242,7 +1263,7 @@ static void nhi_reset(struct tb_nhi *nhi)
>  		return;
>  
>  	if (!host_reset) {
> -		dev_dbg(&nhi->pdev->dev, "skipping host router reset\n");
> +		dev_dbg(nhi->dev, "skipping host router reset\n");
>  		return;
>  	}
>  
> @@ -1253,27 +1274,23 @@ static void nhi_reset(struct tb_nhi *nhi)
>  	do {
>  		val = ioread32(nhi->iobase + REG_RESET);
>  		if (!(val & REG_RESET_HRR)) {
> -			dev_warn(&nhi->pdev->dev, "host router reset successful\n");
> +			dev_warn(nhi->dev, "host router reset successful\n");
>  			return;
>  		}
>  		usleep_range(10, 20);
>  	} while (ktime_before(ktime_get(), timeout));
>  
> -	dev_warn(&nhi->pdev->dev, "timeout resetting host router\n");
> +	dev_warn(nhi->dev, "timeout resetting host router\n");
>  }
>  
> -static int nhi_init_msi(struct tb_nhi *nhi)
> +static int nhi_init_msi(struct tb_nhi_pci *nhi_pci)
>  {
> -	struct pci_dev *pdev = nhi->pdev;
> +	struct tb_nhi *nhi = &nhi_pci->nhi;
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	struct device *dev = &pdev->dev;
>  	int res, irq, nvec;
>  
> -	/* In case someone left them on. */
> -	nhi_disable_interrupts(nhi);
> -
> -	nhi_enable_int_throttling(nhi);
> -
> -	ida_init(&nhi->msix_ida);
> +	ida_init(&nhi_pci->msix_ida);
>  
>  	/*
>  	 * The NHI has 16 MSI-X vectors or a single MSI. We first try to
> @@ -1290,7 +1307,7 @@ static int nhi_init_msi(struct tb_nhi *nhi)
>  
>  		INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
>  
> -		irq = pci_irq_vector(nhi->pdev, 0);
> +		irq = pci_irq_vector(pdev, 0);
>  		if (irq < 0)
>  			return irq;
>  
> @@ -1339,6 +1356,7 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi)
>  static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  {
>  	struct device *dev = &pdev->dev;
> +	struct tb_nhi_pci *nhi_pci;
>  	struct tb_nhi *nhi;
>  	struct tb *tb;
>  	int res;
> @@ -1350,11 +1368,12 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  	if (res)
>  		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
>  
> -	nhi = devm_kzalloc(&pdev->dev, sizeof(*nhi), GFP_KERNEL);
> -	if (!nhi)
> +	nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
> +	if (!nhi_pci)
>  		return -ENOMEM;
>  
> -	nhi->pdev = pdev;
> +	nhi = &nhi_pci->nhi;
> +	nhi->dev = dev;
>  	nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
>  
>  	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
> @@ -1372,11 +1391,15 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  	if (!nhi->tx_rings || !nhi->rx_rings)
>  		return -ENOMEM;
>  
> -	nhi_check_quirks(nhi);
> -	nhi_check_iommu(nhi);
> +	nhi_check_quirks(nhi_pci);
> +	nhi_check_iommu(nhi_pci);
>  	nhi_reset(nhi);
>  
> -	res = nhi_init_msi(nhi);
> +	/* In case someone left them on. */
> +	nhi_disable_interrupts(nhi);
> +	nhi_enable_int_throttling(nhi);
> +
> +	res = nhi_init_msi(nhi_pci);
>  	if (res)
>  		return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
>  
> @@ -1458,6 +1481,75 @@ static const struct dev_pm_ops nhi_pm_ops = {
>  	.runtime_resume = nhi_runtime_resume,
>  };
>  
> +/*
> + * During suspend the Thunderbolt controller is reset and all PCIe
> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
> + * during resume. This adds device links between the tunneled PCIe
> + * downstream ports and the NHI so that the device core will make sure
> + * NHI is resumed first before the rest.
> + */
> +bool tb_apple_add_links(struct tb_nhi *nhi)

This can be moved into pci.c completely; I would think that all USB4 hosts
use device properties.

> +{
> +	struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
> +	struct pci_dev *upstream, *pdev;
> +	bool ret;
> +
> +	if (!x86_apple_machine)
> +		return false;
> +
> +	switch (nhi_pdev->device) {
> +	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
> +	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
> +	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
> +	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
> +		break;
> +	default:
> +		return false;
> +	}
> +
> +	upstream = pci_upstream_bridge(nhi_pdev);
> +	while (upstream) {
> +		if (!pci_is_pcie(upstream))
> +			return false;
> +		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
> +			break;
> +		upstream = pci_upstream_bridge(upstream);
> +	}
> +
> +	if (!upstream)
> +		return false;
> +
> +	/*
> +	 * For each hotplug downstream port, create add device link
> +	 * back to NHI so that PCIe tunnels can be re-established after
> +	 * sleep.
> +	 */
> +	ret = false;
> +	for_each_pci_bridge(pdev, upstream->subordinate) {
> +		const struct device_link *link;
> +
> +		if (!pci_is_pcie(pdev))
> +			continue;
> +		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
> +		    !pdev->is_pciehp)
> +			continue;
> +
> +		link = device_link_add(&pdev->dev, nhi->dev,
> +				       DL_FLAG_AUTOREMOVE_SUPPLIER |
> +				       DL_FLAG_PM_RUNTIME);
> +		if (link) {
> +			dev_dbg(nhi->dev, "created link from %s\n",
> +				dev_name(&pdev->dev));
> +			ret = true;
> +		} else {
> +			dev_warn(nhi->dev, "device link creation from %s failed\n",
> +				 dev_name(&pdev->dev));
> +		}
> +	}
> +
> +	return ret;
> +}
> +
>  static struct pci_device_id nhi_ids[] = {
>  	/*
>  	 * We have to specify class, the TB bridges use the same device and
> diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
> index 24ac4246d0ca..efcd119e26f8 100644
> --- a/drivers/thunderbolt/nhi.h
> +++ b/drivers/thunderbolt/nhi.h
> @@ -29,6 +29,7 @@ enum nhi_mailbox_cmd {
>  
>  int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data);
>  enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
> +bool tb_apple_add_links(struct tb_nhi *nhi);

Then this does not need to be in nhi.h.

>  
>  /**
>   * struct tb_nhi_ops - NHI specific optional operations
> diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
> index 96da07e88c52..8c50066f3411 100644
> --- a/drivers/thunderbolt/nhi_ops.c
> +++ b/drivers/thunderbolt/nhi_ops.c
> @@ -24,7 +24,7 @@ static int check_for_device(struct device *dev, void *data)
>  
>  static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
>  {
> -	struct tb *tb = pci_get_drvdata(nhi->pdev);
> +	struct tb *tb = dev_get_drvdata(nhi->dev);
>  	int ret;
>  
>  	ret = device_for_each_child(&tb->root_switch->dev, NULL,
> @@ -34,6 +34,7 @@ static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
>  
>  static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
>  {
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	u32 vs_cap;
>  
>  	/*
> @@ -48,7 +49,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
>  	 * The actual power management happens inside shared ACPI power
>  	 * resources using standard ACPI methods.
>  	 */
> -	pci_read_config_dword(nhi->pdev, VS_CAP_22, &vs_cap);
> +	pci_read_config_dword(pdev, VS_CAP_22, &vs_cap);
>  	if (power) {
>  		vs_cap &= ~VS_CAP_22_DMA_DELAY_MASK;
>  		vs_cap |= 0x22 << VS_CAP_22_DMA_DELAY_SHIFT;
> @@ -56,7 +57,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
>  	} else {
>  		vs_cap &= ~VS_CAP_22_FORCE_POWER;
>  	}
> -	pci_write_config_dword(nhi->pdev, VS_CAP_22, vs_cap);
> +	pci_write_config_dword(pdev, VS_CAP_22, vs_cap);
>  
>  	if (power) {
>  		unsigned int retries = 350;
> @@ -64,7 +65,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
>  
>  		/* Wait until the firmware tells it is up and running */
>  		do {
> -			pci_read_config_dword(nhi->pdev, VS_CAP_9, &val);
> +			pci_read_config_dword(pdev, VS_CAP_9, &val);
>  			if (val & VS_CAP_9_FW_READY)
>  				return 0;
>  			usleep_range(3000, 3100);
> @@ -78,14 +79,16 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
>  
>  static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox_cmd cmd)
>  {
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	u32 data;
>  
>  	data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK;
> -	pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID);
> +	pci_write_config_dword(pdev, VS_CAP_19, data | VS_CAP_19_VALID);
>  }
>  
>  static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
>  {
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	unsigned long end;
>  	u32 data;
>  
> @@ -94,7 +97,7 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
>  
>  	end = jiffies + msecs_to_jiffies(timeout);
>  	do {
> -		pci_read_config_dword(nhi->pdev, VS_CAP_18, &data);
> +		pci_read_config_dword(pdev, VS_CAP_18, &data);
>  		if (data & VS_CAP_18_DONE)
>  			goto clear;
>  		usleep_range(1000, 1100);
> @@ -104,24 +107,25 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
>  
>  clear:
>  	/* Clear the valid bit */
> -	pci_write_config_dword(nhi->pdev, VS_CAP_19, 0);
> +	pci_write_config_dword(pdev, VS_CAP_19, 0);
>  	return 0;
>  }
>  
>  static void icl_nhi_set_ltr(struct tb_nhi *nhi)
>  {
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
>  	u32 max_ltr, ltr;
>  
> -	pci_read_config_dword(nhi->pdev, VS_CAP_16, &max_ltr);
> +	pci_read_config_dword(pdev, VS_CAP_16, &max_ltr);
>  	max_ltr &= 0xffff;
>  	/* Program the same value for both snoop and no-snoop */
>  	ltr = max_ltr << 16 | max_ltr;
> -	pci_write_config_dword(nhi->pdev, VS_CAP_15, ltr);
> +	pci_write_config_dword(pdev, VS_CAP_15, ltr);
>  }
>  
>  static int icl_nhi_suspend(struct tb_nhi *nhi)
>  {
> -	struct tb *tb = pci_get_drvdata(nhi->pdev);
> +	struct tb *tb = dev_get_drvdata(nhi->dev);
>  	int ret;
>  
>  	if (icl_nhi_is_device_connected(nhi))
> @@ -144,7 +148,7 @@ static int icl_nhi_suspend(struct tb_nhi *nhi)
>  
>  static int icl_nhi_suspend_noirq(struct tb_nhi *nhi, bool wakeup)
>  {
> -	struct tb *tb = pci_get_drvdata(nhi->pdev);
> +	struct tb *tb = dev_get_drvdata(nhi->dev);
>  	enum icl_lc_mailbox_cmd cmd;
>  
>  	if (!pm_suspend_via_firmware())
> diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
> index c2ad58b19e7b..0680209e349c 100644
> --- a/drivers/thunderbolt/switch.c
> +++ b/drivers/thunderbolt/switch.c
> @@ -211,6 +211,7 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
>  
>  static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
>  {
> +	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
>  	struct pci_dev *root_port;
>  
>  	/*
> @@ -219,16 +220,17 @@ static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
>  	 * itself. To be on the safe side keep the root port in D0 during
>  	 * the whole upgrade process.
>  	 */
> -	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
> +	root_port = pcie_find_root_port(pdev);
>  	if (root_port)
>  		pm_runtime_get_noresume(&root_port->dev);
>  }
>  
>  static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
>  {
> +	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
>  	struct pci_dev *root_port;
>  
> -	root_port = pcie_find_root_port(sw->tb->nhi->pdev);
> +	root_port = pcie_find_root_port(pdev);
>  	if (root_port)
>  		pm_runtime_put(&root_port->dev);
>  }
> diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> index c69c323e6952..0126e38d9396 100644
> --- a/drivers/thunderbolt/tb.c
> +++ b/drivers/thunderbolt/tb.c
> @@ -10,7 +10,6 @@
>  #include <linux/errno.h>
>  #include <linux/delay.h>
>  #include <linux/pm_runtime.h>
> -#include <linux/platform_data/x86/apple.h>
>  
>  #include "tb.h"
>  #include "tb_regs.h"
> @@ -3295,74 +3294,6 @@ static const struct tb_cm_ops tb_cm_ops = {
>  	.disconnect_xdomain_paths = tb_disconnect_xdomain_paths,
>  };
>  
> -/*
> - * During suspend the Thunderbolt controller is reset and all PCIe
> - * tunnels are lost. The NHI driver will try to reestablish all tunnels
> - * during resume. This adds device links between the tunneled PCIe
> - * downstream ports and the NHI so that the device core will make sure
> - * NHI is resumed first before the rest.
> - */
> -static bool tb_apple_add_links(struct tb_nhi *nhi)
> -{
> -	struct pci_dev *upstream, *pdev;
> -	bool ret;
> -
> -	if (!x86_apple_machine)
> -		return false;
> -
> -	switch (nhi->pdev->device) {
> -	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
> -	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
> -	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
> -	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
> -		break;
> -	default:
> -		return false;
> -	}
> -
> -	upstream = pci_upstream_bridge(nhi->pdev);
> -	while (upstream) {
> -		if (!pci_is_pcie(upstream))
> -			return false;
> -		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
> -			break;
> -		upstream = pci_upstream_bridge(upstream);
> -	}
> -
> -	if (!upstream)
> -		return false;
> -
> -	/*
> -	 * For each hotplug downstream port, create add device link
> -	 * back to NHI so that PCIe tunnels can be re-established after
> -	 * sleep.
> -	 */
> -	ret = false;
> -	for_each_pci_bridge(pdev, upstream->subordinate) {
> -		const struct device_link *link;
> -
> -		if (!pci_is_pcie(pdev))
> -			continue;
> -		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
> -		    !pdev->is_pciehp)
> -			continue;
> -
> -		link = device_link_add(&pdev->dev, &nhi->pdev->dev,
> -				       DL_FLAG_AUTOREMOVE_SUPPLIER |
> -				       DL_FLAG_PM_RUNTIME);
> -		if (link) {
> -			dev_dbg(&nhi->pdev->dev, "created link from %s\n",
> -				dev_name(&pdev->dev));
> -			ret = true;
> -		} else {
> -			dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
> -				 dev_name(&pdev->dev));
> -		}
> -	}
> -
> -	return ret;
> -}
> -
>  struct tb *tb_probe(struct tb_nhi *nhi)
>  {
>  	struct tb_cm *tcm;
> diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
> index 217c3114bec8..4e11060e144b 100644
> --- a/drivers/thunderbolt/tb.h
> +++ b/drivers/thunderbolt/tb.h
> @@ -725,11 +725,11 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
>  			    length);
>  }
>  
> -#define tb_err(tb, fmt, arg...) dev_err(&(tb)->nhi->pdev->dev, fmt, ## arg)
> -#define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## arg)
> -#define tb_warn(tb, fmt, arg...) dev_warn(&(tb)->nhi->pdev->dev, fmt, ## arg)
> -#define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## arg)
> -#define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg)
> +#define tb_err(tb, fmt, arg...) dev_err((tb)->nhi->dev, fmt, ## arg)
> +#define tb_WARN(tb, fmt, arg...) dev_WARN((tb)->nhi->dev, fmt, ## arg)
> +#define tb_warn(tb, fmt, arg...) dev_warn((tb)->nhi->dev, fmt, ## arg)
> +#define tb_info(tb, fmt, arg...) dev_info((tb)->nhi->dev, fmt, ## arg)
> +#define tb_dbg(tb, fmt, arg...) dev_dbg((tb)->nhi->dev, fmt, ## arg)
>  
>  #define __TB_SW_PRINT(level, sw, fmt, arg...)           \
>  	do {                                            \
> diff --git a/drivers/thunderbolt/usb4_port.c b/drivers/thunderbolt/usb4_port.c
> index c32d3516e780..890de530debc 100644
> --- a/drivers/thunderbolt/usb4_port.c
> +++ b/drivers/thunderbolt/usb4_port.c
> @@ -138,7 +138,7 @@ bool usb4_usb3_port_match(struct device *usb4_port_dev,
>  		return false;
>  
>  	/* Check if USB3 fwnode references same NHI where USB4 port resides */
> -	if (!device_match_fwnode(&nhi->pdev->dev, nhi_fwnode))
> +	if (!device_match_fwnode(nhi->dev, nhi_fwnode))
>  		return false;
>  
>  	if (fwnode_property_read_u8(usb3_port_fwnode, "usb4-port-number", &usb4_port_num))
> diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
> index 0ba112175bb3..789cd7f364e1 100644
> --- a/include/linux/thunderbolt.h
> +++ b/include/linux/thunderbolt.h
> @@ -496,12 +496,11 @@ static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
>   */
>  struct tb_nhi {
>  	spinlock_t lock;
> -	struct pci_dev *pdev;
> +	struct device *dev;

Update kernel-doc.

>  	const struct tb_nhi_ops *ops;
>  	void __iomem *iobase;
>  	struct tb_ring **tx_rings;
>  	struct tb_ring **rx_rings;
> -	struct ida msix_ida;

Ditto.

>  	bool going_away;
>  	bool iommu_dma_protection;
>  	struct work_struct interrupt_work;
> @@ -681,7 +680,7 @@ void tb_ring_poll_complete(struct tb_ring *ring);
>   */
>  static inline struct device *tb_ring_dma_device(struct tb_ring *ring)
>  {
> -	return &ring->nhi->pdev->dev;
> +	return ring->nhi->dev;
>  }
>  
>  bool usb4_usb3_port_match(struct device *usb4_port_dev,
> 
> -- 
> 2.54.0

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-04-28 18:49 ` [PATCH v2 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
@ 2026-05-04  6:54   ` Mika Westerberg
  2026-05-12 13:06     ` Konrad Dybcio
  0 siblings, 1 reply; 13+ messages in thread
From: Mika Westerberg @ 2026-05-04  6:54 UTC (permalink / raw)
  To: Konrad Dybcio
  Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
	linux-usb, usb4-upstream, Raghavendra Thoorpu, Konrad Dybcio

Hi,

On Tue, Apr 28, 2026 at 08:49:45PM +0200, Konrad Dybcio wrote:
> + * @pre_nvm_auth: hook to run before TBT3 NVM authentication

Thunderbolt 3 NVM authentication

> + * @post_nvm_auth: hook to run after TBT3 NVM authentication

ditto here.

> + * @request_ring_irq: NHI specific interrupt retrieval hook
> + * @release_ring_irq: NHI specific interrupt release hook
> + * @is_present: Whether the device is currently present on the parent bus
> + * @init_interrupts: NHI specific interrupt initialization hook
>   */
>  struct tb_nhi_ops {
>  	int (*init)(struct tb_nhi *nhi);
> @@ -47,6 +61,12 @@ struct tb_nhi_ops {
>  	int (*runtime_suspend)(struct tb_nhi *nhi);
>  	int (*runtime_resume)(struct tb_nhi *nhi);
>  	void (*shutdown)(struct tb_nhi *nhi);
> +	void (*pre_nvm_auth)(struct tb_nhi *nhi);
> +	void (*post_nvm_auth)(struct tb_nhi *nhi);
> +	int (*request_ring_irq)(struct tb_ring *ring, bool no_suspend);
> +	void (*release_ring_irq)(struct tb_ring *ring);
> +	bool (*is_present)(struct tb_nhi *nhi);
> +	int (*init_interrupts)(struct tb_nhi *nhi);
>  };
>  
>  extern const struct tb_nhi_ops icl_nhi_ops;
> @@ -101,4 +121,15 @@ extern const struct tb_nhi_ops icl_nhi_ops;
>  
>  #define PCI_CLASS_SERIAL_USB_USB4			0x0c0340
>  
> +/* Host interface quirks */
> +#define QUIRK_AUTO_CLEAR_INT	BIT(0)
> +#define QUIRK_E2E		BIT(1)
> +
> +/*
> + * Minimal number of vectors when we use MSI-X. Two for control channel
> + * Rx/Tx and the rest four are for cross domain DMA paths.
> + */
> +#define MSIX_MIN_VECS		6
> +#define MSIX_MAX_VECS		16
> +
>  #endif
> diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
> index 8c50066f3411..530337a78322 100644
> --- a/drivers/thunderbolt/nhi_ops.c
> +++ b/drivers/thunderbolt/nhi_ops.c
> @@ -11,6 +11,7 @@
>  
>  #include "nhi.h"
>  #include "nhi_regs.h"
> +#include "pci.h"
>  #include "tb.h"
>  
>  /* Ice Lake specific NHI operations */
> @@ -176,6 +177,8 @@ static int icl_nhi_resume(struct tb_nhi *nhi)
>  
>  static void icl_nhi_shutdown(struct tb_nhi *nhi)
>  {
> +	nhi_pci_shutdown(nhi);
> +
>  	icl_nhi_force_power(nhi, false);
>  }
>  
> @@ -186,4 +189,10 @@ const struct tb_nhi_ops icl_nhi_ops = {
>  	.runtime_suspend = icl_nhi_suspend,
>  	.runtime_resume = icl_nhi_resume,
>  	.shutdown = icl_nhi_shutdown,
> +	.pre_nvm_auth = nhi_pci_start_dma_port,
> +	.post_nvm_auth = nhi_pci_complete_dma_port,
> +	.request_ring_irq = nhi_pci_ring_request_msix,
> +	.release_ring_irq = nhi_pci_ring_release_msix,
> +	.is_present = nhi_pci_is_present,
> +	.init_interrupts = nhi_pci_init_msi,
>  };
> diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
> new file mode 100644
> index 000000000000..400ba88db034
> --- /dev/null
> +++ b/drivers/thunderbolt/pci.c
> @@ -0,0 +1,507 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*
> + * Thunderbolt driver - PCI NHI driver
> + *
> + * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
> + * Copyright (C) 2018, Intel Corporation
> + */
> +
> +#include <linux/pm_runtime.h>
> +#include <linux/slab.h>
> +#include <linux/errno.h>
> +#include <linux/pci.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/iommu.h>
> +#include <linux/module.h>
> +#include <linux/delay.h>
> +#include <linux/platform_data/x86/apple.h>
> +#include <linux/property.h>
> +#include <linux/string_helpers.h>
> +
> +#include "nhi.h"
> +#include "nhi_regs.h"
> +#include "pci.h"
> +#include "tb.h"
> +

Kernel-doc

> +struct tb_nhi_pci {
> +	struct tb_nhi nhi;
> +	struct ida msix_ida;
> +};
> +
> +static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
> +{
> +	return container_of(nhi, struct tb_nhi_pci, nhi);
> +}
> +
> +static void nhi_pci_check_quirks(struct tb_nhi_pci *nhi_pci)
> +{
> +	struct tb_nhi *nhi = &nhi_pci->nhi;
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +
> +	if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
> +		/*
> +		 * Intel hardware supports auto clear of the interrupt
> +		 * status register right after interrupt is being
> +		 * issued.
> +		 */
> +		nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
> +
> +		switch (pdev->device) {
> +		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
> +		case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
> +			/*
> +			 * Falcon Ridge controller needs the end-to-end
> +			 * flow control workaround to avoid losing Rx
> +			 * packets when RING_FLAG_E2E is set.
> +			 */
> +			nhi->quirks |= QUIRK_E2E;
> +			break;
> +		}
> +	}
> +}
> +
> +static int nhi_pci_check_iommu_pdev(struct pci_dev *pdev, void *data)
> +{
> +	if (!pdev->external_facing ||
> +	    !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
> +		return 0;
> +	*(bool *)data = true;
> +	return 1; /* Stop walking */
> +}
> +
> +static void nhi_pci_check_iommu(struct tb_nhi_pci *nhi_pci)
> +{
> +	struct tb_nhi *nhi = &nhi_pci->nhi;
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	struct pci_bus *bus = pdev->bus;
> +	bool port_ok = false;
> +
> +	/*
> +	 * Ideally what we'd do here is grab every PCI device that
> +	 * represents a tunnelling adapter for this NHI and check their
> +	 * status directly, but unfortunately USB4 seems to make it
> +	 * obnoxiously difficult to reliably make any correlation.
> +	 *
> +	 * So for now we'll have to bodge it... Hoping that the system
> +	 * is at least sane enough that an adapter is in the same PCI
> +	 * segment as its NHI, if we can find *something* on that segment
> +	 * which meets the requirements for Kernel DMA Protection, we'll
> +	 * take that to imply that firmware is aware and has (hopefully)
> +	 * done the right thing in general. We need to know that the PCI
> +	 * layer has seen the ExternalFacingPort property which will then
> +	 * inform the IOMMU layer to enforce the complete "untrusted DMA"
> +	 * flow, but also that the IOMMU driver itself can be trusted not
> +	 * to have been subverted by a pre-boot DMA attack.
> +	 */
> +	while (bus->parent)
> +		bus = bus->parent;
> +
> +	pci_walk_bus(bus, nhi_pci_check_iommu_pdev, &port_ok);
> +
> +	nhi->iommu_dma_protection = port_ok;
> +	dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
> +		str_enabled_disabled(port_ok));
> +}
> +
> +int nhi_pci_init_msi(struct tb_nhi *nhi)
> +{
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	struct device *dev = &pdev->dev;
> +	int res, irq, nvec;
> +
> +	ida_init(&nhi_pci->msix_ida);
> +
> +	/*
> +	 * The NHI has 16 MSI-X vectors or a single MSI. We first try to
> +	 * get all MSI-X vectors and if we succeed, each ring will have
> +	 * one MSI-X. If for some reason that does not work out, we
> +	 * fallback to a single MSI.
> +	 */
> +	nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
> +				     PCI_IRQ_MSIX);
> +	if (nvec < 0) {
> +		nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
> +		if (nvec < 0)
> +			return nvec;
> +
> +		INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
> +
> +		irq = pci_irq_vector(pdev, 0);
> +		if (irq < 0)
> +			return irq;
> +
> +		res = devm_request_irq(&pdev->dev, irq, nhi_msi,
> +				       IRQF_NO_SUSPEND, "thunderbolt", nhi);
> +		if (res)
> +			return dev_err_probe(dev, res, "request_irq failed, aborting\n");
> +	}
> +
> +	return 0;
> +}
> +
> +static bool nhi_pci_imr_valid(struct pci_dev *pdev)
> +{
> +	u8 val;
> +
> +	if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val))
> +		return !!val;
> +
> +	return true;
> +}
> +
> +void nhi_pci_start_dma_port(struct tb_nhi *nhi)
> +{
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	struct pci_dev *root_port;
> +
> +	/*
> +	 * During host router NVM upgrade we should not allow root port to
> +	 * go into D3cold because some root ports cannot trigger PME
> +	 * itself. To be on the safe side keep the root port in D0 during
> +	 * the whole upgrade process.
> +	 */
> +	root_port = pcie_find_root_port(pdev);
> +	if (root_port)
> +		pm_runtime_get_noresume(&root_port->dev);
> +}
> +
> +void nhi_pci_complete_dma_port(struct tb_nhi *nhi)
> +{
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	struct pci_dev *root_port;
> +
> +	root_port = pcie_find_root_port(pdev);
> +	if (root_port)
> +		pm_runtime_put(&root_port->dev);
> +}
> +
> +int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend)
> +{
> +	struct tb_nhi *nhi = ring->nhi;
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +	unsigned long irqflags;
> +	int ret;
> +
> +	if (!pdev->msix_enabled)
> +		return 0;
> +
> +	ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
> +	if (ret < 0)
> +		return ret;
> +
> +	ring->vector = ret;
> +
> +	ret = pci_irq_vector(pdev, ring->vector);
> +	if (ret < 0)
> +		goto err_ida_remove;
> +
> +	ring->irq = ret;
> +
> +	irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
> +	ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
> +	if (ret)
> +		goto err_ida_remove;
> +
> +	return 0;
> +
> +err_ida_remove:
> +	ida_free(&nhi_pci->msix_ida, ring->vector);
> +
> +	return ret;
> +}
> +
> +void nhi_pci_ring_release_msix(struct tb_ring *ring)
> +{
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
> +
> +	if (ring->irq <= 0)
> +		return;
> +
> +	free_irq(ring->irq, ring);
> +	ida_free(&nhi_pci->msix_ida, ring->vector);
> +	ring->vector = 0;
> +	ring->irq = 0;
> +}
> +
> +void nhi_pci_shutdown(struct tb_nhi *nhi)

Why are these not static?

> +{
> +	struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
> +	struct pci_dev *pdev = to_pci_dev(nhi->dev);
> +
> +	/*
> +	 * We have to release the irq before calling flush_work. Otherwise an
> +	 * already executing IRQ handler could call schedule_work again.
> +	 */
> +	if (!pdev->msix_enabled) {
> +		devm_free_irq(nhi->dev, pdev->irq, nhi);
> +		flush_work(&nhi->interrupt_work);
> +	}
> +	ida_destroy(&nhi_pci->msix_ida);
> +}
> +
> +bool nhi_pci_is_present(struct tb_nhi *nhi)
> +{
> +	return pci_device_is_present(to_pci_dev(nhi->dev));
> +}
> +
> +static const struct tb_nhi_ops pci_nhi_default_ops = {
> +	.pre_nvm_auth = nhi_pci_start_dma_port,
> +	.post_nvm_auth = nhi_pci_complete_dma_port,
> +	.request_ring_irq = nhi_pci_ring_request_msix,
> +	.release_ring_irq = nhi_pci_ring_release_msix,
> +	.shutdown = nhi_pci_shutdown,
> +	.is_present = nhi_pci_is_present,
> +	.init_interrupts = nhi_pci_init_msi,

You populate them here so there is no need to expose them outside of pci.c.

> +};
> +
> +static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> +	struct device *dev = &pdev->dev;
> +	struct tb_nhi_pci *nhi_pci;
> +	struct tb_nhi *nhi;
> +	int res;
> +
> +	if (!nhi_pci_imr_valid(pdev))
> +		return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n");
> +
> +	res = pcim_enable_device(pdev);
> +	if (res)
> +		return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
> +
> +	nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
> +	if (!nhi_pci)
> +		return -ENOMEM;
> +
> +	nhi = &nhi_pci->nhi;
> +	nhi->dev = dev;
> +	nhi->ops = (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_default_ops;
> +
> +	nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
> +	res = PTR_ERR_OR_ZERO(nhi->iobase);
> +	if (res)
> +		return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
> +
> +	nhi_pci_check_quirks(nhi_pci);
> +	nhi_pci_check_iommu(nhi_pci);
> +
> +	pci_set_master(pdev);
> +
> +	res = nhi_probe(&nhi_pci->nhi);
> +	if (res)
> +		return dev_err_probe(dev, res, "NHI common probe failed\n");

You can make nhi_probe() log the error itself so that here you can just do:

	return nhi_probe(...);

> +
> +	return 0;
> +}
> +
> +static void nhi_pci_remove(struct pci_dev *pdev)
> +{
> +	struct tb *tb = pci_get_drvdata(pdev);
> +	struct tb_nhi *nhi = tb->nhi;
> +
> +	pm_runtime_get_sync(&pdev->dev);
> +	pm_runtime_dont_use_autosuspend(&pdev->dev);
> +	pm_runtime_forbid(&pdev->dev);
> +
> +	tb_domain_remove(tb);
> +	nhi_shutdown(nhi);
> +}
> +
> +/*
> + * During suspend the Thunderbolt controller is reset and all PCIe
> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
> + * during resume. This adds device links between the tunneled PCIe
> + * downstream ports and the NHI so that the device core will make sure
> + * NHI is resumed first before the rest.
> + */
> +bool tb_apple_add_links(struct tb_nhi *nhi)

Okay, you moved it here, good. I think we can call it in nhi_pci_probe()
directly so there is no need to expose it outside.

> +{
> +	struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
> +	struct pci_dev *upstream, *pdev;
> +	bool ret;
> +
> +	if (!x86_apple_machine)
> +		return false;
> +
> +	switch (nhi_pdev->device) {
> +	case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
> +	case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
> +	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
> +	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
> +		break;
> +	default:
> +		return false;
> +	}
> +
> +	upstream = pci_upstream_bridge(nhi_pdev);
> +	while (upstream) {
> +		if (!pci_is_pcie(upstream))
> +			return false;
> +		if (pci_pcie_type(upstream) == PCI_EXP_TYPE_UPSTREAM)
> +			break;
> +		upstream = pci_upstream_bridge(upstream);
> +	}
> +
> +	if (!upstream)
> +		return false;
> +
> +	/*
> +	 * For each hotplug downstream port, create add device link
> +	 * back to NHI so that PCIe tunnels can be re-established after
> +	 * sleep.
> +	 */
> +	ret = false;
> +	for_each_pci_bridge(pdev, upstream->subordinate) {
> +		const struct device_link *link;
> +
> +		if (!pci_is_pcie(pdev))
> +			continue;
> +		if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
> +		    !pdev->is_pciehp)
> +			continue;
> +
> +		link = device_link_add(&pdev->dev, nhi->dev,
> +				       DL_FLAG_AUTOREMOVE_SUPPLIER |
> +				       DL_FLAG_PM_RUNTIME);
> +		if (link) {
> +			dev_dbg(nhi->dev, "created link from %s\n",
> +				dev_name(&pdev->dev));
> +			ret = true;
> +		} else {
> +			dev_warn(nhi->dev, "device link creation from %s failed\n",
> +				 dev_name(&pdev->dev));
> +		}
> +	}
> +
> +	return ret;
> +}
> +
> +static struct pci_device_id nhi_ids[] = {
> +	/*
> +	 * We have to specify class, the TB bridges use the same device and
> +	 * vendor (sub)id on gen 1 and gen 2 controllers.
> +	 */
> +	{
> +		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
> +		.vendor = PCI_VENDOR_ID_INTEL,
> +		.device = PCI_DEVICE_ID_INTEL_LIGHT_RIDGE,
> +		.subvendor = 0x2222, .subdevice = 0x1111,
> +	},
> +	{
> +		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
> +		.vendor = PCI_VENDOR_ID_INTEL,
> +		.device = PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C,
> +		.subvendor = 0x2222, .subdevice = 0x1111,
> +	},
> +	{
> +		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
> +		.vendor = PCI_VENDOR_ID_INTEL,
> +		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI,
> +		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
> +	},
> +	{
> +		.class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
> +		.vendor = PCI_VENDOR_ID_INTEL,
> +		.device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
> +		.subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
> +	},
> +
> +	/* Thunderbolt 3 */
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	/* Thunderbolt 4 */
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0),
> +	  .driver_data = (kernel_ulong_t)&icl_nhi_ops },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
> +	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
> +
> +	/* Any USB4 compliant host */
> +	{ PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
> +
> +	{ 0,}
> +};
> +
> +MODULE_DEVICE_TABLE(pci, nhi_ids);
> +MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
> +MODULE_LICENSE("GPL");
> +
> +static struct pci_driver nhi_driver = {
> +	.name = "thunderbolt",
> +	.id_table = nhi_ids,
> +	.probe = nhi_pci_probe,
> +	.remove = nhi_pci_remove,
> +	.shutdown = nhi_pci_remove,
> +	.driver.pm = &nhi_pm_ops,
> +};
> +
> +static int __init nhi_init(void)
> +{
> +	int ret;
> +
> +	ret = tb_domain_init();
> +	if (ret)
> +		return ret;
> +
> +	ret = pci_register_driver(&nhi_driver);
> +	if (ret)
> +		tb_domain_exit();
> +
> +	return ret;
> +}
> +
> +static void __exit nhi_unload(void)
> +{
> +	pci_unregister_driver(&nhi_driver);
> +	tb_domain_exit();
> +}
> +
> +rootfs_initcall(nhi_init);
> +module_exit(nhi_unload);
> diff --git a/drivers/thunderbolt/pci.h b/drivers/thunderbolt/pci.h
> new file mode 100644
> index 000000000000..8ce272a10661
> --- /dev/null
> +++ b/drivers/thunderbolt/pci.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
> + */
> +
> +#ifndef __TBT_PCI_H
> +#define __TBT_PCI_H

__TB_PCI_H

> +
> +#include <linux/types.h>
> +
> +void nhi_pci_start_dma_port(struct tb_nhi *nhi);
> +void nhi_pci_complete_dma_port(struct tb_nhi *nhi);
> +int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend);
> +void nhi_pci_ring_release_msix(struct tb_ring *ring);
> +bool nhi_pci_is_present(struct tb_nhi *nhi);
> +void nhi_pci_shutdown(struct tb_nhi *nhi);
> +int nhi_pci_init_msi(struct tb_nhi *nhi);

And since these are already callbacks there is no need to expose them in
this header.

> +
> +#endif
> diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
> index 0680209e349c..9647650ee02d 100644
> --- a/drivers/thunderbolt/switch.c
> +++ b/drivers/thunderbolt/switch.c
> @@ -209,32 +209,6 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
>  	return -ETIMEDOUT;
>  }
>  
> -static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
> -{
> -	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
> -	struct pci_dev *root_port;
> -
> -	/*
> -	 * During host router NVM upgrade we should not allow root port to
> -	 * go into D3cold because some root ports cannot trigger PME
> -	 * itself. To be on the safe side keep the root port in D0 during
> -	 * the whole upgrade process.
> -	 */
> -	root_port = pcie_find_root_port(pdev);
> -	if (root_port)
> -		pm_runtime_get_noresume(&root_port->dev);
> -}
> -
> -static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
> -{
> -	struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
> -	struct pci_dev *root_port;
> -
> -	root_port = pcie_find_root_port(pdev);
> -	if (root_port)
> -		pm_runtime_put(&root_port->dev);
> -}
> -
>  static inline bool nvm_readable(struct tb_switch *sw)
>  {
>  	if (tb_switch_is_usb4(sw)) {
> @@ -260,6 +234,7 @@ static inline bool nvm_upgradeable(struct tb_switch *sw)
>  
>  static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
>  {
> +	struct tb_nhi *nhi = sw->tb->nhi;
>  	int ret;
>  
>  	if (tb_switch_is_usb4(sw)) {
> @@ -276,7 +251,8 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
>  
>  	sw->nvm->authenticating = true;
>  	if (!tb_route(sw)) {
> -		nvm_authenticate_start_dma_port(sw);
> +		if (nhi->ops && nhi->ops->pre_nvm_auth)
> +			nhi->ops->pre_nvm_auth(nhi);
>  		ret = nvm_authenticate_host_dma_port(sw);
>  	} else {
>  		ret = nvm_authenticate_device_dma_port(sw);
> @@ -2745,6 +2721,7 @@ static int tb_switch_set_uuid(struct tb_switch *sw)
>  
>  static int tb_switch_add_dma_port(struct tb_switch *sw)
>  {
> +	struct tb_nhi *nhi = sw->tb->nhi;
>  	u32 status;
>  	int ret;
>  
> @@ -2804,8 +2781,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
>  	 */
>  	nvm_get_auth_status(sw, &status);
>  	if (status) {
> -		if (!tb_route(sw))
> -			nvm_authenticate_complete_dma_port(sw);
> +		if (!tb_route(sw)) {
> +			if (nhi->ops && nhi->ops->post_nvm_auth)
> +				nhi->ops->post_nvm_auth(nhi);
> +		}
>  		return 0;
>  	}
>  
> @@ -2819,8 +2798,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
>  		return ret;
>  
>  	/* Now we can allow root port to suspend again */
> -	if (!tb_route(sw))
> -		nvm_authenticate_complete_dma_port(sw);
> +	if (!tb_route(sw)) {
> +		if (nhi->ops && nhi->ops->post_nvm_auth)
> +			nhi->ops->post_nvm_auth(nhi);
> +	}
>  
>  	if (status) {
>  		tb_sw_info(sw, "switch flash authentication failed\n");
> 
> -- 
> 2.54.0

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-04  6:54   ` Mika Westerberg
@ 2026-05-12 13:06     ` Konrad Dybcio
  2026-05-12 13:20       ` Mika Westerberg
  0 siblings, 1 reply; 13+ messages in thread
From: Konrad Dybcio @ 2026-05-12 13:06 UTC (permalink / raw)
  To: Mika Westerberg, Konrad Dybcio
  Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
	linux-usb, usb4-upstream, Raghavendra Thoorpu

On 5/4/26 8:54 AM, Mika Westerberg wrote:
> Hi,

[...]

>> +void nhi_pci_shutdown(struct tb_nhi *nhi)
> 
> Why these are not static?

[...]

>> +static const struct tb_nhi_ops pci_nhi_default_ops = {
>> +	.pre_nvm_auth = nhi_pci_start_dma_port,
>> +	.post_nvm_auth = nhi_pci_complete_dma_port,
>> +	.request_ring_irq = nhi_pci_ring_request_msix,
>> +	.release_ring_irq = nhi_pci_ring_release_msix,
>> +	.shutdown = nhi_pci_shutdown,
>> +	.is_present = nhi_pci_is_present,
>> +	.init_interrupts = nhi_pci_init_msi,
> 
> You populate them here so there is no need to expose outside of pci.c.

nhi_ops.c needs them too, as they were previously called
unconditionally for all NHI flavors.

[...]


>> +/*
>> + * During suspend the Thunderbolt controller is reset and all PCIe
>> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
>> + * during resume. This adds device links between the tunneled PCIe
>> + * downstream ports and the NHI so that the device core will make sure
>> + * NHI is resumed first before the rest.
>> + */
>> +bool tb_apple_add_links(struct tb_nhi *nhi)
> 
> Okay you moved it here good. I think we can call it in nhi_pci_probe()
> directly so no need to expose outside.

Yeah that seems like a good idea. It's already there, behind N calls
in the software CM case.

Do we have to check the CM type though, or do you think it'd be fine
to just call it unconditionally? (either because there are presumably
no Apple machines with ICM or because these devlinks would be harmless?)

(ack for all the remaining comments)

Konrad

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-12 13:06     ` Konrad Dybcio
@ 2026-05-12 13:20       ` Mika Westerberg
  2026-05-12 13:43         ` Konrad Dybcio
  0 siblings, 1 reply; 13+ messages in thread
From: Mika Westerberg @ 2026-05-12 13:20 UTC (permalink / raw)
  To: Konrad Dybcio
  Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
	linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu

On Tue, May 12, 2026 at 03:06:58PM +0200, Konrad Dybcio wrote:
> On 5/4/26 8:54 AM, Mika Westerberg wrote:
> > Hi,
> 
> [...]
> 
> >> +void nhi_pci_shutdown(struct tb_nhi *nhi)
> > 
> > Why these are not static?
> 
> [...]
> 
> >> +static const struct tb_nhi_ops pci_nhi_default_ops = {
> >> +	.pre_nvm_auth = nhi_pci_start_dma_port,
> >> +	.post_nvm_auth = nhi_pci_complete_dma_port,
> >> +	.request_ring_irq = nhi_pci_ring_request_msix,
> >> +	.release_ring_irq = nhi_pci_ring_release_msix,
> >> +	.shutdown = nhi_pci_shutdown,
> >> +	.is_present = nhi_pci_is_present,
> >> +	.init_interrupts = nhi_pci_init_msi,
> > 
> > You populate them here so there is no need to expose outside of pci.c.
> 
> nhi_ops.c needs them too, as they were previously called
> unconditionally for all NHI flavors

OK.

> [...]
> 
> 
> >> +/*
> >> + * During suspend the Thunderbolt controller is reset and all PCIe
> >> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
> >> + * during resume. This adds device links between the tunneled PCIe
> >> + * downstream ports and the NHI so that the device core will make sure
> >> + * NHI is resumed first before the rest.
> >> + */
> >> +bool tb_apple_add_links(struct tb_nhi *nhi)
> > 
> > Okay you moved it here good. I think we can call it in nhi_pci_probe()
> > directly so no need to expose outside.
> 
> Yeah that seems like a good idea. It's already there, behind N calls
> in the software CM case.
> 
> Do we have to check the CM type though, or do you think it'd be fine
> to just call it unconditionally? (either because there are presumably
> no Apple machines with ICM or because these devlinks would be harmless?)

I think you can call it unconditionally. It only does something for TB1-2
Apple systems.

For Apple TB3 we used to start ICM firmware but this was changed as the
driver learned SW CM. However, we never setup any device links so this
would not change anything.

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-12 13:20       ` Mika Westerberg
@ 2026-05-12 13:43         ` Konrad Dybcio
  2026-05-12 13:54           ` Mika Westerberg
  0 siblings, 1 reply; 13+ messages in thread
From: Konrad Dybcio @ 2026-05-12 13:43 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
	linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu

On 5/12/26 3:20 PM, Mika Westerberg wrote:
> On Tue, May 12, 2026 at 03:06:58PM +0200, Konrad Dybcio wrote:
>> On 5/4/26 8:54 AM, Mika Westerberg wrote:
>>> Hi,

[...]

>>>> +/*
>>>> + * During suspend the Thunderbolt controller is reset and all PCIe
>>>> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
>>>> + * during resume. This adds device links between the tunneled PCIe
>>>> + * downstream ports and the NHI so that the device core will make sure
>>>> + * NHI is resumed first before the rest.
>>>> + */
>>>> +bool tb_apple_add_links(struct tb_nhi *nhi)
>>>
>>> Okay you moved it here good. I think we can call it in nhi_pci_probe()
>>> directly so no need to expose outside.
>>
>> Yeah that seems like a good idea. It's already there, behind N calls
>> in the software CM case.
>>
>> Do we have to check the CM type though, or do you think it'd be fine
>> to just call it unconditionally? (either because there are presumably
>> no Apple machines with ICM or because these devlinks would be harmless?)
> 
> I think you can call it unconditionally. It only does something for TB1-2
> Apple systems.
> 
> For Apple TB3 we used to start ICM firmware but this was changed as the
> driver learned SW CM. However, we never setup any device links so this
> would not change anything.

OK. I'm keeping tb_acpi_add_links() as-is, since that's both bus- and
arch-independent.

However, doing just something like:

diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index cb5d028de3bc..f5ddc8ddb8bb 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -3327,7 +3327,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
         * before the PCIe/USB stack is resumed so complain here if we
         * found them missing.
         */
-       if (!tb_apple_add_links(nhi) && !tb_acpi_add_links(nhi))
+       if (!tb_acpi_add_links(nhi))
                tb_warn(tb, "device links to tunneled native ports are missing!\n");


diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
index ca50e3584cac..e0abd1d503c5 100644
--- a/drivers/thunderbolt/pci.c
+++ b/drivers/thunderbolt/pci.c
@@ -294,6 +294,8 @@ static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
        pci_set_master(pdev);
 
+       tb_apple_add_links(nhi);
+
        return nhi_probe(&nhi_pci->nhi);
 }


Will cause the warning to show up. And adding something like
`nhi->device_links_done` is a little ugly.. Ideas?

Konrad

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-12 13:43         ` Konrad Dybcio
@ 2026-05-12 13:54           ` Mika Westerberg
  2026-05-12 13:58             ` Konrad Dybcio
  0 siblings, 1 reply; 13+ messages in thread
From: Mika Westerberg @ 2026-05-12 13:54 UTC (permalink / raw)
  To: Konrad Dybcio
  Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
	linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu

On Tue, May 12, 2026 at 03:43:12PM +0200, Konrad Dybcio wrote:
> On 5/12/26 3:20 PM, Mika Westerberg wrote:
> > On Tue, May 12, 2026 at 03:06:58PM +0200, Konrad Dybcio wrote:
> >> On 5/4/26 8:54 AM, Mika Westerberg wrote:
> >>> Hi,
> 
> [...]
> 
> >>>> +/*
> >>>> + * During suspend the Thunderbolt controller is reset and all PCIe
> >>>> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
> >>>> + * during resume. This adds device links between the tunneled PCIe
> >>>> + * downstream ports and the NHI so that the device core will make sure
> >>>> + * NHI is resumed first before the rest.
> >>>> + */
> >>>> +bool tb_apple_add_links(struct tb_nhi *nhi)
> >>>
> >>> Okay you moved it here good. I think we can call it in nhi_pci_probe()
> >>> directly so no need to expose outside.
> >>
> >> Yeah that seems like a good idea. It's already there, behind N calls
> >> in the software CM case.
> >>
> >> Do we have to check the CM type though, or do you think it'd be fine
> >> to just call it unconditionally? (either because there are presumably
> >> no Apple machines with ICM or because these devlinks would be harmless?)
> > 
> > I think you can call it unconditionally. It only does something for TB1-2
> > Apple systems.
> > 
> > For Apple TB3 we used to start ICM firmware but this was changed as the
> > driver learned SW CM. However, we never setup any device links so this
> > would not change anything.
> 
> OK. I'm keeping tb_acpi_add_link() as-is, since that's both bus- and
> arch-independent.
> 
> However, doing just something like:
> 
> diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> index cb5d028de3bc..f5ddc8ddb8bb 100644
> --- a/drivers/thunderbolt/tb.c
> +++ b/drivers/thunderbolt/tb.c
> @@ -3327,7 +3327,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
>          * before the PCIe/USB stack is resumed so complain here if we
>          * found them missing.
>          */
> -       if (!tb_apple_add_links(nhi) && !tb_acpi_add_links(nhi))
> +       if (!tb_acpi_add_links(nhi))
>                 tb_warn(tb, "device links to tunneled native ports are missing!\n");
> 
> 
> diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
> index ca50e3584cac..e0abd1d503c5 100644
> --- a/drivers/thunderbolt/pci.c
> +++ b/drivers/thunderbolt/pci.c
> @@ -294,6 +294,8 @@ static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  
>         pci_set_master(pdev);
>  
> +       tb_apple_add_links(nhi);
> +
>         return nhi_probe(&nhi_pci->nhi);
>  }
> 
> 
> Will cause the warning to show up. And adding something like
> `nhi->device_links_done` is a little ugly.. Ideas?

Ah in Qualcomm case? We are going to add tb_of_add_links() as well, right (or
something along those lines)? Then tb.c does:

       if (!tb_acpi_add_links(nhi) && !tb_of_add_links(nhi))
                 tb_warn(tb, "device links to tunneled native ports are missing!\n");

In the meantime it is okay to have that warn because we really do want to
have those links in place :)

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-12 13:54           ` Mika Westerberg
@ 2026-05-12 13:58             ` Konrad Dybcio
  2026-05-12 14:02               ` Mika Westerberg
  0 siblings, 1 reply; 13+ messages in thread
From: Konrad Dybcio @ 2026-05-12 13:58 UTC (permalink / raw)
  To: Mika Westerberg
  Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
	linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu

On 5/12/26 3:54 PM, Mika Westerberg wrote:
> On Tue, May 12, 2026 at 03:43:12PM +0200, Konrad Dybcio wrote:
>> On 5/12/26 3:20 PM, Mika Westerberg wrote:
>>> On Tue, May 12, 2026 at 03:06:58PM +0200, Konrad Dybcio wrote:
>>>> On 5/4/26 8:54 AM, Mika Westerberg wrote:
>>>>> Hi,
>>
>> [...]
>>
>>>>>> +/*
>>>>>> + * During suspend the Thunderbolt controller is reset and all PCIe
>>>>>> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
>>>>>> + * during resume. This adds device links between the tunneled PCIe
>>>>>> + * downstream ports and the NHI so that the device core will make sure
>>>>>> + * NHI is resumed first before the rest.
>>>>>> + */
>>>>>> +bool tb_apple_add_links(struct tb_nhi *nhi)
>>>>>
>>>>> Okay you moved it here good. I think we can call it in nhi_pci_probe()
>>>>> directly so no need to expose outside.
>>>>
>>>> Yeah that seems like a good idea. It's already there, behind N calls
>>>> in the software CM case.
>>>>
>>>> Do we have to check the CM type though, or do you think it'd be fine
>>>> to just call it unconditionally? (either because there are presumably
>>>> no Apple machines with ICM or because these devlinks would be harmless?)
>>>
>>> I think you can call it unconditionally. It only does something for TB1-2
>>> Apple systems.
>>>
>>> For Apple TB3 we used to start ICM firmware but this was changed as the
>>> driver learned SW CM. However, we never setup any device links so this
>>> would not change anything.
>>
>> OK. I'm keeping tb_acpi_add_links() as-is, since that's both bus- and
>> arch-independent.
>>
>> However, doing just something like:
>>
>> diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
>> index cb5d028de3bc..f5ddc8ddb8bb 100644
>> --- a/drivers/thunderbolt/tb.c
>> +++ b/drivers/thunderbolt/tb.c
>> @@ -3327,7 +3327,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
>>          * before the PCIe/USB stack is resumed so complain here if we
>>          * found them missing.
>>          */
>> -       if (!tb_apple_add_links(nhi) && !tb_acpi_add_links(nhi))
>> +       if (!tb_acpi_add_links(nhi))
>>                 tb_warn(tb, "device links to tunneled native ports are missing!\n");
>>
>>
>> diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
>> index ca50e3584cac..e0abd1d503c5 100644
>> --- a/drivers/thunderbolt/pci.c
>> +++ b/drivers/thunderbolt/pci.c
>> @@ -294,6 +294,8 @@ static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>>  
>>         pci_set_master(pdev);
>>  
>> +       tb_apple_add_links(nhi);
>> +
>>         return nhi_probe(&nhi_pci->nhi);
>>  }
>>
>>
>> Will cause the warning to show up. And adding something like
>> `nhi->device_links_done` is a little ugly.. Ideas?
> 
> Ah in Qualcomm case? We are going to add tb_of_add_links() as well, right (or
> something along those lines)? Then tb.c does:
> 
>        if (!tb_acpi_add_links(nhi) && !tb_of_add_links(nhi))
>                  tb_warn(tb, "device links to tunneled native ports are missing!\n");
> 
> In the meantime it is okay to have that warn because we really do want to
> have those links in place :)

No no, I meant that with the diff above, tb_apple_add_links() failing
would not lead to any warning messages, and it would always hit the
warning in tb.c

(because these two are now checked independently)

Konrad

* Re: [PATCH v2 2/4] thunderbolt: Separate out common NHI bits
  2026-05-12 13:58             ` Konrad Dybcio
@ 2026-05-12 14:02               ` Mika Westerberg
  0 siblings, 0 replies; 13+ messages in thread
From: Mika Westerberg @ 2026-05-12 14:02 UTC (permalink / raw)
  To: Konrad Dybcio
  Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
	linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu

On Tue, May 12, 2026 at 03:58:34PM +0200, Konrad Dybcio wrote:
> On 5/12/26 3:54 PM, Mika Westerberg wrote:
> > On Tue, May 12, 2026 at 03:43:12PM +0200, Konrad Dybcio wrote:
> >> On 5/12/26 3:20 PM, Mika Westerberg wrote:
> >>> On Tue, May 12, 2026 at 03:06:58PM +0200, Konrad Dybcio wrote:
> >>>> On 5/4/26 8:54 AM, Mika Westerberg wrote:
> >>>>> Hi,
> >>
> >> [...]
> >>
> >>>>>> +/*
> >>>>>> + * During suspend the Thunderbolt controller is reset and all PCIe
> >>>>>> + * tunnels are lost. The NHI driver will try to reestablish all tunnels
> >>>>>> + * during resume. This adds device links between the tunneled PCIe
> >>>>>> + * downstream ports and the NHI so that the device core will make sure
> >>>>>> + * NHI is resumed first before the rest.
> >>>>>> + */
> >>>>>> +bool tb_apple_add_links(struct tb_nhi *nhi)
> >>>>>
> >>>>> Okay you moved it here good. I think we can call it in nhi_pci_probe()
> >>>>> directly so no need to expose outside.
> >>>>
> >>>> Yeah that seems like a good idea. It's already there, behind N calls
> >>>> in the software CM case.
> >>>>
> >>>> Do we have to check the CM type though, or do you think it'd be fine
> >>>> to just call it unconditionally? (either because there are presumably
> >>>> no Apple machines with ICM or because these devlinks would be harmless?)
> >>>
> >>> I think you can call it unconditionally. It only does something for TB1-2
> >>> Apple systems.
> >>>
> >>> For Apple TB3 we used to start ICM firmware but this was changed as the
> >>> driver learned SW CM. However, we never setup any device links so this
> >>> would not change anything.
> >>
> >> OK. I'm keeping tb_acpi_add_links() as-is, since that's both bus- and
> >> arch-independent.
> >>
> >> However, doing just something like:
> >>
> >> diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
> >> index cb5d028de3bc..f5ddc8ddb8bb 100644
> >> --- a/drivers/thunderbolt/tb.c
> >> +++ b/drivers/thunderbolt/tb.c
> >> @@ -3327,7 +3327,7 @@ struct tb *tb_probe(struct tb_nhi *nhi)
> >>          * before the PCIe/USB stack is resumed so complain here if we
> >>          * found them missing.
> >>          */
> >> -       if (!tb_apple_add_links(nhi) && !tb_acpi_add_links(nhi))
> >> +       if (!tb_acpi_add_links(nhi))
> >>                 tb_warn(tb, "device links to tunneled native ports are missing!\n");
> >>
> >>
> >> diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
> >> index ca50e3584cac..e0abd1d503c5 100644
> >> --- a/drivers/thunderbolt/pci.c
> >> +++ b/drivers/thunderbolt/pci.c
> >> @@ -294,6 +294,8 @@ static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> >>  
> >>         pci_set_master(pdev);
> >>  
> >> +       tb_apple_add_links(nhi);
> >> +
> >>         return nhi_probe(&nhi_pci->nhi);
> >>  }
> >>
> >>
> >> Will cause the warning to show up. And adding something like
> >> `nhi->device_links_done` is a little ugly.. Ideas?
> > 
> > Ah in Qualcomm case? We are going to add tb_of_add_links() as well, right (or
> > something along thoese lines)? Then tb.c does:
> > 
> >        if (!tb_acpi_add_links(nhi) && !tb_of_add_links(nhi))
> >                  tb_warn(tb, "device links to tunneled native ports are missing!\n");
> > 
> > In the meantime it is okay to have that warn because we really do want to
> > have those links in place :)
> 
> No no, I meant that with the diff above, tb_apple_add_links() failing
> would not lead to any warning messages, and it would always hit the
> warning in tb.c

Got it. Right. Let's then keep it as is for now (i.e. don't move it from
tb.c).

end of thread, other threads:[~2026-05-12 14:02 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-28 18:49 [PATCH v2 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
2026-04-28 18:49 ` [PATCH v2 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
2026-05-04  6:40   ` Mika Westerberg
2026-04-28 18:49 ` [PATCH v2 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
2026-05-04  6:54   ` Mika Westerberg
2026-05-12 13:06     ` Konrad Dybcio
2026-05-12 13:20       ` Mika Westerberg
2026-05-12 13:43         ` Konrad Dybcio
2026-05-12 13:54           ` Mika Westerberg
2026-05-12 13:58             ` Konrad Dybcio
2026-05-12 14:02               ` Mika Westerberg
2026-04-28 18:49 ` [PATCH v2 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
2026-04-28 18:49 ` [PATCH v2 4/4] thunderbolt: Add some more descriptive probe error messages Konrad Dybcio
