* [PATCH v3 0/4] Prepwork for non-PCIe NHI/TBT hosts
From: Konrad Dybcio @ 2026-05-13 16:23 UTC (permalink / raw)
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
Konrad Dybcio
Currently, the NHI driver (and other parts of the TBT framework) makes
multiple assumptions about the host router being a PCIe device. This
series tries to decouple them by moving the 'struct pci_dev' out of
the NHI code and introducing NHI-on-PCIe-specific abstractions where
necessary (with no functional change).
The intended usage of the new nhi_probe_common() is pretty similar to
other bus frameworks (I2C, SPI, USB..), i.e.:
static int foo_bar_probe() {
// get SoC-specific resources (clks, regulators..)
// power things on
// set some implementation-specific registers
// register NHI and all the sub-devices
ret = nhi_probe(&my_usb4->nhi);
...
// cleanup boilerplate
}
Instead of the previously-suggested aux/fauxbus, the NHI device remains
the same 'struct device' as the PCIe/platform/[...] device that provides
it. This is in line with some other buses and it makes things easier
from the PM perspective.
Tested on:
* Qualcomm X1E80100 CRD (OOT driver)
* USB4 (Qualcomm controller)
* Connected to a TBT3 ASUS ProArt 27 monitor
* Parade PS8830 on-board retimer
Domain 0 Route 0: 0000:0000
Domain 0 Route 2: 0031:9000 ASUS-Display PA27AC
* Intel Coffee Lake NUC (NUC8i3BEK)
* TBT3 (Alpine Ridge 2C 2016 controller)
* Connected to a Dell TB16 dock (TBT active cable)
* S3 + S2idle sleep
Domain 0 Route 0: 8086:6357 Intel Corporation NUC8BEB
Domain 0 Route 1: 00d4:b051 Dell Dell Thunderbolt Cable
Domain 0 Route 301: 00d4:b054 Dell Dell Thunderbolt Dock
* AMD Ryzen 7 PRO 7840U-based Lenovo ThinkPad T14s Gen 4
* USB4 ("Pink Sardine" controller)
* Connected to a Lenovo ThinkPad Thunderbolt 3 Dock
* Parade PS8830 on-board retimer
* Only S2idle is present on this platform
Domain 0 Route 0: 0000:0000
Domain 1 Route 0: 0000:0000
Domain 1 Route 2: 0108:1630 Lenovo ThinkPad Thunderbolt 3 Dock
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
Changes in v3:
- Add missing/update affected kerneldoc
- Apply naming change suggestions
- Back out of moving tb_apple_add_links()
- Drop error log from nhi_pci_probe() calling nhi_probe()
- Unbreak some lines, touch up some change-adjacent whitespace
- Rebase on next-20260508
- Link to v2: https://lore.kernel.org/r/20260428-topic-usb4_nonpcie_prepwork-v2-0-452fb9d63f77@oss.qualcomm.com
Changes in v2:
- Make 'struct tb_nhi_pci' private, strip it of the 'struct pci_dev'
field since it can be accessed via to_pci_dev(tb_nhi_pci->nhi.dev)
- Thin out patch 1, move some of its prior contents to patch 2
- Rename nhi_pci.[ch] to pci.[ch]
- Rename nhi_probe_common() to nhi_probe()
- Squash a number of bugs discovered at runtime on x86
- Add a patch to make ops necessary to drop boilerplate checks
- Reword the error messages introduced in the last patch
- Drop RFC/RFT tags
- Link to v1: https://lore.kernel.org/r/20260309-topic-usb4_nonpcie_prepwork-v1-0-d901d85fc794@oss.qualcomm.com
To: Andreas Noever <andreas.noever@gmail.com>
To: Mika Westerberg <westeri@kernel.org>
To: Yehezkel Bernat <YehezkelShB@gmail.com>
Cc: linux-usb@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
Konrad Dybcio (4):
thunderbolt: Move pci_device out of tb_nhi
thunderbolt: Separate out common NHI bits
thunderbolt: Require nhi->ops be valid
thunderbolt: Add some more descriptive probe error messages
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/acpi.c | 14 +-
drivers/thunderbolt/ctl.c | 16 +-
drivers/thunderbolt/domain.c | 2 +-
drivers/thunderbolt/eeprom.c | 2 +-
drivers/thunderbolt/icm.c | 24 +-
drivers/thunderbolt/nhi.c | 525 +++++++---------------------------------
drivers/thunderbolt/nhi.h | 31 +++
drivers/thunderbolt/nhi_ops.c | 35 ++-
drivers/thunderbolt/pci.c | 439 +++++++++++++++++++++++++++++++++
drivers/thunderbolt/pci.h | 19 ++
drivers/thunderbolt/switch.c | 41 +---
drivers/thunderbolt/tb.c | 18 +-
drivers/thunderbolt/tb.h | 10 +-
drivers/thunderbolt/usb4_port.c | 2 +-
include/linux/thunderbolt.h | 8 +-
16 files changed, 667 insertions(+), 521 deletions(-)
---
base-commit: e98d21c170b01ddef366f023bbfcf6b31509fa83
change-id: 20260309-topic-usb4_nonpcie_prepwork-86881f769b8f
Best regards,
--
Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
* [PATCH v3 1/4] thunderbolt: Move pci_device out of tb_nhi
From: Konrad Dybcio @ 2026-05-13 16:23 UTC (permalink / raw)
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
Konrad Dybcio
From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Not all USB4/TB implementations are based on a PCIe-attached
controller. In order to make way for these, start by moving the
pci_dev reference out of the main tb_nhi structure.

Encapsulate the existing struct in a new tb_nhi_pci, which shall also
house all properties that relate to the parent bus. Similarly, any
other type of controller will be expected to contain tb_nhi as a
member.
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
drivers/thunderbolt/acpi.c | 14 ++--
drivers/thunderbolt/ctl.c | 16 ++--
drivers/thunderbolt/domain.c | 2 +-
drivers/thunderbolt/eeprom.c | 2 +-
drivers/thunderbolt/icm.c | 24 +++---
drivers/thunderbolt/nhi.c | 158 +++++++++++++++++++++++-----------------
drivers/thunderbolt/nhi_ops.c | 26 ++++---
drivers/thunderbolt/switch.c | 6 +-
drivers/thunderbolt/tb.c | 11 +--
drivers/thunderbolt/tb.h | 10 +--
drivers/thunderbolt/usb4_port.c | 2 +-
include/linux/thunderbolt.h | 8 +-
12 files changed, 155 insertions(+), 124 deletions(-)
diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
index 45d1415871b4..53546bc477a5 100644
--- a/drivers/thunderbolt/acpi.c
+++ b/drivers/thunderbolt/acpi.c
@@ -28,7 +28,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
return AE_OK;
/* It needs to reference this NHI */
- if (dev_fwnode(&nhi->pdev->dev) != fwnode)
+ if (dev_fwnode(nhi->dev) != fwnode)
goto out_put;
/*
@@ -57,16 +57,16 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
*/
pm_runtime_get_sync(&pdev->dev);
- link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+ link = device_link_add(&pdev->dev, nhi->dev,
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_RPM_ACTIVE |
DL_FLAG_PM_RUNTIME);
if (link) {
- dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+ dev_dbg(nhi->dev, "created link from %s\n",
dev_name(&pdev->dev));
*(bool *)ret = true;
} else {
- dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+ dev_warn(nhi->dev, "device link creation from %s failed\n",
dev_name(&pdev->dev));
}
@@ -93,7 +93,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
acpi_status status;
bool ret = false;
- if (!has_acpi_companion(&nhi->pdev->dev))
+ if (!has_acpi_companion(nhi->dev))
return false;
/*
@@ -103,7 +103,7 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT, 32,
tb_acpi_add_link, NULL, nhi, (void **)&ret);
if (ACPI_FAILURE(status)) {
- dev_warn(&nhi->pdev->dev, "failed to enumerate tunneled ports\n");
+ dev_warn(nhi->dev, "failed to enumerate tunneled ports\n");
return false;
}
@@ -305,7 +305,7 @@ static struct acpi_device *tb_acpi_switch_find_companion(struct tb_switch *sw)
struct tb_nhi *nhi = sw->tb->nhi;
struct acpi_device *parent_adev;
- parent_adev = ACPI_COMPANION(&nhi->pdev->dev);
+ parent_adev = ACPI_COMPANION(nhi->dev);
if (parent_adev)
adev = acpi_find_child_device(parent_adev, 0, false);
}
diff --git a/drivers/thunderbolt/ctl.c b/drivers/thunderbolt/ctl.c
index b2fd60fc7bcc..cd47b627f97b 100644
--- a/drivers/thunderbolt/ctl.c
+++ b/drivers/thunderbolt/ctl.c
@@ -56,22 +56,22 @@ struct tb_ctl {
#define tb_ctl_WARN(ctl, format, arg...) \
- dev_WARN(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_WARN((ctl)->nhi->dev, format, ## arg)
#define tb_ctl_err(ctl, format, arg...) \
- dev_err(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_err((ctl)->nhi->dev, format, ## arg)
#define tb_ctl_warn(ctl, format, arg...) \
- dev_warn(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_warn((ctl)->nhi->dev, format, ## arg)
#define tb_ctl_info(ctl, format, arg...) \
- dev_info(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_info((ctl)->nhi->dev, format, ## arg)
#define tb_ctl_dbg(ctl, format, arg...) \
- dev_dbg(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_dbg((ctl)->nhi->dev, format, ## arg)
#define tb_ctl_dbg_once(ctl, format, arg...) \
- dev_dbg_once(&(ctl)->nhi->pdev->dev, format, ## arg)
+ dev_dbg_once((ctl)->nhi->dev, format, ## arg)
static DECLARE_WAIT_QUEUE_HEAD(tb_cfg_request_cancel_queue);
/* Serializes access to request kref_get/put */
@@ -666,8 +666,8 @@ struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int index, int timeout_msec,
mutex_init(&ctl->request_queue_lock);
INIT_LIST_HEAD(&ctl->request_queue);
- ctl->frame_pool = dma_pool_create("thunderbolt_ctl", &nhi->pdev->dev,
- TB_FRAME_SIZE, 4, 0);
+ ctl->frame_pool = dma_pool_create("thunderbolt_ctl", nhi->dev,
+ TB_FRAME_SIZE, 4, 0);
if (!ctl->frame_pool)
goto err;
diff --git a/drivers/thunderbolt/domain.c b/drivers/thunderbolt/domain.c
index d83719a37b4c..62dcf24b5f9b 100644
--- a/drivers/thunderbolt/domain.c
+++ b/drivers/thunderbolt/domain.c
@@ -405,7 +405,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize
if (!tb->ctl)
goto err_destroy_wq;
- tb->dev.parent = &nhi->pdev->dev;
+ tb->dev.parent = nhi->dev;
tb->dev.bus = &tb_bus_type;
tb->dev.type = &tb_domain_type;
tb->dev.groups = domain_attr_groups;
diff --git a/drivers/thunderbolt/eeprom.c b/drivers/thunderbolt/eeprom.c
index 5477b9437048..5681c17f82ec 100644
--- a/drivers/thunderbolt/eeprom.c
+++ b/drivers/thunderbolt/eeprom.c
@@ -465,7 +465,7 @@ static void tb_switch_drom_free(struct tb_switch *sw)
*/
static int tb_drom_copy_efi(struct tb_switch *sw, u16 *size)
{
- struct device *dev = &sw->tb->nhi->pdev->dev;
+ struct device *dev = sw->tb->nhi->dev;
int len, res;
len = device_property_count_u8(dev, "ThunderboltDROM");
diff --git a/drivers/thunderbolt/icm.c b/drivers/thunderbolt/icm.c
index c492995166f7..10fefac3b1d9 100644
--- a/drivers/thunderbolt/icm.c
+++ b/drivers/thunderbolt/icm.c
@@ -1466,6 +1466,7 @@ static struct pci_dev *get_upstream_port(struct pci_dev *pdev)
static bool icm_ar_is_supported(struct tb *tb)
{
+ struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
struct pci_dev *upstream_port;
struct icm *icm = tb_priv(tb);
@@ -1483,7 +1484,7 @@ static bool icm_ar_is_supported(struct tb *tb)
* Find the upstream PCIe port in case we need to do reset
* through its vendor specific registers.
*/
- upstream_port = get_upstream_port(tb->nhi->pdev);
+ upstream_port = get_upstream_port(pdev);
if (upstream_port) {
int cap;
@@ -1519,7 +1520,7 @@ static int icm_ar_get_mode(struct tb *tb)
} while (--retries);
if (!retries) {
- dev_err(&nhi->pdev->dev, "ICM firmware not authenticated\n");
+ dev_err(nhi->dev, "ICM firmware not authenticated\n");
return -ENODEV;
}
@@ -1685,11 +1686,11 @@ icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level,
static void icm_icl_set_uuid(struct tb *tb)
{
- struct tb_nhi *nhi = tb->nhi;
+ struct pci_dev *pdev = to_pci_dev(tb->nhi->dev);
u32 uuid[4];
- pci_read_config_dword(nhi->pdev, VS_CAP_10, &uuid[0]);
- pci_read_config_dword(nhi->pdev, VS_CAP_11, &uuid[1]);
+ pci_read_config_dword(pdev, VS_CAP_10, &uuid[0]);
+ pci_read_config_dword(pdev, VS_CAP_11, &uuid[1]);
uuid[2] = 0xffffffff;
uuid[3] = 0xffffffff;
@@ -1866,7 +1867,7 @@ static int icm_firmware_start(struct tb *tb, struct tb_nhi *nhi)
if (icm_firmware_running(nhi))
return 0;
- dev_dbg(&nhi->pdev->dev, "starting ICM firmware\n");
+ dev_dbg(nhi->dev, "starting ICM firmware\n");
ret = icm_firmware_reset(tb, nhi);
if (ret)
@@ -1961,7 +1962,7 @@ static int icm_firmware_init(struct tb *tb)
ret = icm_firmware_start(tb, nhi);
if (ret) {
- dev_err(&nhi->pdev->dev, "could not start ICM firmware\n");
+ dev_err(nhi->dev, "could not start ICM firmware\n");
return ret;
}
@@ -1993,10 +1994,10 @@ static int icm_firmware_init(struct tb *tb)
*/
ret = icm_reset_phy_port(tb, 0);
if (ret)
- dev_warn(&nhi->pdev->dev, "failed to reset links on port0\n");
+ dev_warn(nhi->dev, "failed to reset links on port0\n");
ret = icm_reset_phy_port(tb, 1);
if (ret)
- dev_warn(&nhi->pdev->dev, "failed to reset links on port1\n");
+ dev_warn(nhi->dev, "failed to reset links on port1\n");
return 0;
}
@@ -2477,6 +2478,7 @@ static const struct tb_cm_ops icm_icl_ops = {
struct tb *icm_probe(struct tb_nhi *nhi)
{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
struct icm *icm;
struct tb *tb;
@@ -2488,7 +2490,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
INIT_DELAYED_WORK(&icm->rescan_work, icm_rescan_work);
mutex_init(&icm->request_lock);
- switch (nhi->pdev->device) {
+ switch (pdev->device) {
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
icm->can_upgrade_nvm = true;
@@ -2594,7 +2596,7 @@ struct tb *icm_probe(struct tb_nhi *nhi)
}
if (!icm->is_supported || !icm->is_supported(tb)) {
- dev_dbg(&nhi->pdev->dev, "ICM not supported on this controller\n");
+ dev_dbg(nhi->dev, "ICM not supported on this controller\n");
tb_domain_put(tb);
return NULL;
}
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 1a2051673067..59b261c078b6 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -2,7 +2,7 @@
/*
* Thunderbolt driver - NHI driver
*
- * The NHI (native host interface) is the pci device that allows us to send and
+ * The NHI (native host interface) is the device that allows us to send and
* receive frames from the thunderbolt bus.
*
* Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
@@ -12,7 +12,6 @@
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/errno.h>
-#include <linux/pci.h>
#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/iommu.h>
@@ -51,6 +50,21 @@ static bool host_reset = true;
module_param(host_reset, bool, 0444);
MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)");
+/**
+ * struct tb_nhi_pci - NHI device connected over PCIe
+ * @nhi: NHI device
+ * @msix_ida: Used to allocate MSI-X vectors for rings
+ */
+struct tb_nhi_pci {
+ struct tb_nhi nhi;
+ struct ida msix_ida;
+};
+
+static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
+{
+ return container_of(nhi, struct tb_nhi_pci, nhi);
+}
+
static int ring_interrupt_index(const struct tb_ring *ring)
{
int bit = ring->hop;
@@ -139,15 +153,14 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
else
new = old & ~mask;
- dev_dbg(&ring->nhi->pdev->dev,
+ dev_dbg(ring->nhi->dev,
"%s interrupt at register %#x bit %d (%#x -> %#x)\n",
active ? "enabling" : "disabling", reg, interrupt_bit, old, new);
if (new == old)
- dev_WARN(&ring->nhi->pdev->dev,
- "interrupt for %s %d is already %s\n",
- RING_TYPE(ring), ring->hop,
- str_enabled_disabled(active));
+ dev_WARN(ring->nhi->dev, "interrupt for %s %d is already %s\n",
+ RING_TYPE(ring), ring->hop,
+ str_enabled_disabled(active));
if (active)
iowrite32(new, ring->nhi->iobase + reg);
@@ -462,19 +475,21 @@ static irqreturn_t ring_msix(int irq, void *data)
static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
{
struct tb_nhi *nhi = ring->nhi;
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
unsigned long irqflags;
int ret;
- if (!nhi->pdev->msix_enabled)
+ if (!pdev->msix_enabled)
return 0;
- ret = ida_alloc_max(&nhi->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
+ ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
if (ret < 0)
return ret;
ring->vector = ret;
- ret = pci_irq_vector(ring->nhi->pdev, ring->vector);
+ ret = pci_irq_vector(pdev, ring->vector);
if (ret < 0)
goto err_ida_remove;
@@ -488,18 +503,20 @@ static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
return 0;
err_ida_remove:
- ida_free(&nhi->msix_ida, ring->vector);
+ ida_free(&nhi_pci->msix_ida, ring->vector);
return ret;
}
static void ring_release_msix(struct tb_ring *ring)
{
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
+
if (ring->irq <= 0)
return;
free_irq(ring->irq, ring);
- ida_free(&ring->nhi->msix_ida, ring->vector);
+ ida_free(&nhi_pci->msix_ida, ring->vector);
ring->vector = 0;
ring->irq = 0;
}
@@ -512,7 +529,7 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
if (nhi->quirks & QUIRK_E2E) {
start_hop = RING_FIRST_USABLE_HOPID + 1;
if (ring->flags & RING_FLAG_E2E && !ring->is_tx) {
- dev_dbg(&nhi->pdev->dev, "quirking E2E TX HopID %u -> %u\n",
+ dev_dbg(nhi->dev, "quirking E2E TX HopID %u -> %u\n",
ring->e2e_tx_hop, RING_E2E_RESERVED_HOPID);
ring->e2e_tx_hop = RING_E2E_RESERVED_HOPID;
}
@@ -543,23 +560,23 @@ static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
}
if (ring->hop > 0 && ring->hop < start_hop) {
- dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
+ dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
ret = -EINVAL;
goto err_unlock;
}
if (ring->hop < 0 || ring->hop >= nhi->hop_count) {
- dev_warn(&nhi->pdev->dev, "invalid hop: %d\n", ring->hop);
+ dev_warn(nhi->dev, "invalid hop: %d\n", ring->hop);
ret = -EINVAL;
goto err_unlock;
}
if (ring->is_tx && nhi->tx_rings[ring->hop]) {
- dev_warn(&nhi->pdev->dev, "TX hop %d already allocated\n",
+ dev_warn(nhi->dev, "TX hop %d already allocated\n",
ring->hop);
ret = -EBUSY;
goto err_unlock;
}
if (!ring->is_tx && nhi->rx_rings[ring->hop]) {
- dev_warn(&nhi->pdev->dev, "RX hop %d already allocated\n",
+ dev_warn(nhi->dev, "RX hop %d already allocated\n",
ring->hop);
ret = -EBUSY;
goto err_unlock;
@@ -584,7 +601,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
{
struct tb_ring *ring = NULL;
- dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n",
+ dev_dbg(nhi->dev, "allocating %s ring %d of size %d\n",
transmit ? "TX" : "RX", hop, size);
ring = kzalloc_obj(*ring);
@@ -610,9 +627,9 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
ring->start_poll = start_poll;
ring->poll_data = poll_data;
- ring->descriptors = dma_alloc_coherent(&ring->nhi->pdev->dev,
- size * sizeof(*ring->descriptors),
- &ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
+ ring->descriptors = dma_alloc_coherent(ring->nhi->dev,
+ size * sizeof(*ring->descriptors),
+ &ring->descriptors_dma, GFP_KERNEL | __GFP_ZERO);
if (!ring->descriptors)
goto err_free_ring;
@@ -627,7 +644,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
err_release_msix:
ring_release_msix(ring);
err_free_descs:
- dma_free_coherent(&ring->nhi->pdev->dev,
+ dma_free_coherent(ring->nhi->dev,
ring->size * sizeof(*ring->descriptors),
ring->descriptors, ring->descriptors_dma);
err_free_ring:
@@ -694,10 +711,10 @@ void tb_ring_start(struct tb_ring *ring)
if (ring->nhi->going_away)
goto err;
if (ring->running) {
- dev_WARN(&ring->nhi->pdev->dev, "ring already started\n");
+ dev_WARN(ring->nhi->dev, "ring already started\n");
goto err;
}
- dev_dbg(&ring->nhi->pdev->dev, "starting %s %d\n",
+ dev_dbg(ring->nhi->dev, "starting %s %d\n",
RING_TYPE(ring), ring->hop);
if (ring->flags & RING_FLAG_FRAME) {
@@ -734,11 +751,11 @@ void tb_ring_start(struct tb_ring *ring)
hop &= REG_RX_OPTIONS_E2E_HOP_MASK;
flags |= hop;
- dev_dbg(&ring->nhi->pdev->dev,
+ dev_dbg(ring->nhi->dev,
"enabling E2E for %s %d with TX HopID %d\n",
RING_TYPE(ring), ring->hop, ring->e2e_tx_hop);
} else {
- dev_dbg(&ring->nhi->pdev->dev, "enabling E2E for %s %d\n",
+ dev_dbg(ring->nhi->dev, "enabling E2E for %s %d\n",
RING_TYPE(ring), ring->hop);
}
@@ -772,12 +789,12 @@ void tb_ring_stop(struct tb_ring *ring)
{
spin_lock_irq(&ring->nhi->lock);
spin_lock(&ring->lock);
- dev_dbg(&ring->nhi->pdev->dev, "stopping %s %d\n",
+ dev_dbg(ring->nhi->dev, "stopping %s %d\n",
RING_TYPE(ring), ring->hop);
if (ring->nhi->going_away)
goto err;
if (!ring->running) {
- dev_WARN(&ring->nhi->pdev->dev, "%s %d already stopped\n",
+ dev_WARN(ring->nhi->dev, "%s %d already stopped\n",
RING_TYPE(ring), ring->hop);
goto err;
}
@@ -826,14 +843,14 @@ void tb_ring_free(struct tb_ring *ring)
ring->nhi->rx_rings[ring->hop] = NULL;
if (ring->running) {
- dev_WARN(&ring->nhi->pdev->dev, "%s %d still running\n",
+ dev_WARN(ring->nhi->dev, "%s %d still running\n",
RING_TYPE(ring), ring->hop);
}
spin_unlock_irq(&ring->nhi->lock);
ring_release_msix(ring);
- dma_free_coherent(&ring->nhi->pdev->dev,
+ dma_free_coherent(ring->nhi->dev,
ring->size * sizeof(*ring->descriptors),
ring->descriptors, ring->descriptors_dma);
@@ -841,7 +858,7 @@ void tb_ring_free(struct tb_ring *ring)
ring->descriptors_dma = 0;
- dev_dbg(&ring->nhi->pdev->dev, "freeing %s %d\n", RING_TYPE(ring),
+ dev_dbg(ring->nhi->dev, "freeing %s %d\n", RING_TYPE(ring),
ring->hop);
/*
@@ -940,9 +957,7 @@ static void nhi_interrupt_work(struct work_struct *work)
if ((value & (1 << (bit % 32))) == 0)
continue;
if (type == 2) {
- dev_warn(&nhi->pdev->dev,
- "RX overflow for ring %d\n",
- hop);
+ dev_warn(nhi->dev, "RX overflow for ring %d\n", hop);
continue;
}
if (type == 0)
@@ -950,7 +965,7 @@ static void nhi_interrupt_work(struct work_struct *work)
else
ring = nhi->rx_rings[hop];
if (ring == NULL) {
- dev_warn(&nhi->pdev->dev,
+ dev_warn(nhi->dev,
"got interrupt for inactive %s ring %d\n",
type ? "RX" : "TX",
hop);
@@ -1139,16 +1154,18 @@ static int nhi_runtime_resume(struct device *dev)
static void nhi_shutdown(struct tb_nhi *nhi)
{
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
int i;
- dev_dbg(&nhi->pdev->dev, "shutdown\n");
+ dev_dbg(nhi->dev, "shutdown\n");
for (i = 0; i < nhi->hop_count; i++) {
if (nhi->tx_rings[i])
- dev_WARN(&nhi->pdev->dev,
+ dev_WARN(nhi->dev,
"TX ring %d is still active\n", i);
if (nhi->rx_rings[i])
- dev_WARN(&nhi->pdev->dev,
+ dev_WARN(nhi->dev,
"RX ring %d is still active\n", i);
}
nhi_disable_interrupts(nhi);
@@ -1156,19 +1173,22 @@ static void nhi_shutdown(struct tb_nhi *nhi)
* We have to release the irq before calling flush_work. Otherwise an
* already executing IRQ handler could call schedule_work again.
*/
- if (!nhi->pdev->msix_enabled) {
- devm_free_irq(&nhi->pdev->dev, nhi->pdev->irq, nhi);
+ if (!pdev->msix_enabled) {
+ devm_free_irq(nhi->dev, pdev->irq, nhi);
flush_work(&nhi->interrupt_work);
}
- ida_destroy(&nhi->msix_ida);
+ ida_destroy(&nhi_pci->msix_ida);
if (nhi->ops && nhi->ops->shutdown)
nhi->ops->shutdown(nhi);
}
-static void nhi_check_quirks(struct tb_nhi *nhi)
+static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci)
{
- if (nhi->pdev->vendor == PCI_VENDOR_ID_INTEL) {
+ struct tb_nhi *nhi = &nhi_pci->nhi;
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
/*
* Intel hardware supports auto clear of the interrupt
* status register right after interrupt is being
@@ -1176,7 +1196,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
*/
nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
- switch (nhi->pdev->device) {
+ switch (pdev->device) {
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
/*
@@ -1190,7 +1210,7 @@ static void nhi_check_quirks(struct tb_nhi *nhi)
}
}
-static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
+static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data)
{
if (!pdev->external_facing ||
!device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
@@ -1199,9 +1219,11 @@ static int nhi_check_iommu_pdev(struct pci_dev *pdev, void *data)
return 1; /* Stop walking */
}
-static void nhi_check_iommu(struct tb_nhi *nhi)
+static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci)
{
- struct pci_bus *bus = nhi->pdev->bus;
+ struct tb_nhi *nhi = &nhi_pci->nhi;
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ struct pci_bus *bus = pdev->bus;
bool port_ok = false;
/*
@@ -1224,10 +1246,10 @@ static void nhi_check_iommu(struct tb_nhi *nhi)
while (bus->parent)
bus = bus->parent;
- pci_walk_bus(bus, nhi_check_iommu_pdev, &port_ok);
+ pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok);
nhi->iommu_dma_protection = port_ok;
- dev_dbg(&nhi->pdev->dev, "IOMMU DMA protection is %s\n",
+ dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
str_enabled_disabled(port_ok));
}
@@ -1242,7 +1264,7 @@ static void nhi_reset(struct tb_nhi *nhi)
return;
if (!host_reset) {
- dev_dbg(&nhi->pdev->dev, "skipping host router reset\n");
+ dev_dbg(nhi->dev, "skipping host router reset\n");
return;
}
@@ -1253,27 +1275,23 @@ static void nhi_reset(struct tb_nhi *nhi)
do {
val = ioread32(nhi->iobase + REG_RESET);
if (!(val & REG_RESET_HRR)) {
- dev_warn(&nhi->pdev->dev, "host router reset successful\n");
+ dev_warn(nhi->dev, "host router reset successful\n");
return;
}
usleep_range(10, 20);
} while (ktime_before(ktime_get(), timeout));
- dev_warn(&nhi->pdev->dev, "timeout resetting host router\n");
+ dev_warn(nhi->dev, "timeout resetting host router\n");
}
-static int nhi_init_msi(struct tb_nhi *nhi)
+static int nhi_init_msi(struct tb_nhi_pci *nhi_pci)
{
- struct pci_dev *pdev = nhi->pdev;
+ struct tb_nhi *nhi = &nhi_pci->nhi;
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
struct device *dev = &pdev->dev;
int res, irq, nvec;
- /* In case someone left them on. */
- nhi_disable_interrupts(nhi);
-
- nhi_enable_int_throttling(nhi);
-
- ida_init(&nhi->msix_ida);
+ ida_init(&nhi_pci->msix_ida);
/*
* The NHI has 16 MSI-X vectors or a single MSI. We first try to
@@ -1290,7 +1308,7 @@ static int nhi_init_msi(struct tb_nhi *nhi)
INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
- irq = pci_irq_vector(nhi->pdev, 0);
+ irq = pci_irq_vector(pdev, 0);
if (irq < 0)
return irq;
@@ -1339,6 +1357,7 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi)
static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct device *dev = &pdev->dev;
+ struct tb_nhi_pci *nhi_pci;
struct tb_nhi *nhi;
struct tb *tb;
int res;
@@ -1350,11 +1369,12 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (res)
return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
- nhi = devm_kzalloc(&pdev->dev, sizeof(*nhi), GFP_KERNEL);
- if (!nhi)
+ nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
+ if (!nhi_pci)
return -ENOMEM;
- nhi->pdev = pdev;
+ nhi = &nhi_pci->nhi;
+ nhi->dev = dev;
nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
@@ -1372,11 +1392,15 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (!nhi->tx_rings || !nhi->rx_rings)
return -ENOMEM;
- nhi_check_quirks(nhi);
- nhi_check_iommu(nhi);
+ nhi_check_quirks(nhi_pci);
+ nhi_check_iommu(nhi_pci);
nhi_reset(nhi);
- res = nhi_init_msi(nhi);
+ /* In case someone left them on. */
+ nhi_disable_interrupts(nhi);
+ nhi_enable_int_throttling(nhi);
+
+ res = nhi_init_msi(nhi_pci);
if (res)
return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
index 96da07e88c52..8c50066f3411 100644
--- a/drivers/thunderbolt/nhi_ops.c
+++ b/drivers/thunderbolt/nhi_ops.c
@@ -24,7 +24,7 @@ static int check_for_device(struct device *dev, void *data)
static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
{
- struct tb *tb = pci_get_drvdata(nhi->pdev);
+ struct tb *tb = dev_get_drvdata(nhi->dev);
int ret;
ret = device_for_each_child(&tb->root_switch->dev, NULL,
@@ -34,6 +34,7 @@ static bool icl_nhi_is_device_connected(struct tb_nhi *nhi)
static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
u32 vs_cap;
/*
@@ -48,7 +49,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
* The actual power management happens inside shared ACPI power
* resources using standard ACPI methods.
*/
- pci_read_config_dword(nhi->pdev, VS_CAP_22, &vs_cap);
+ pci_read_config_dword(pdev, VS_CAP_22, &vs_cap);
if (power) {
vs_cap &= ~VS_CAP_22_DMA_DELAY_MASK;
vs_cap |= 0x22 << VS_CAP_22_DMA_DELAY_SHIFT;
@@ -56,7 +57,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
} else {
vs_cap &= ~VS_CAP_22_FORCE_POWER;
}
- pci_write_config_dword(nhi->pdev, VS_CAP_22, vs_cap);
+ pci_write_config_dword(pdev, VS_CAP_22, vs_cap);
if (power) {
unsigned int retries = 350;
@@ -64,7 +65,7 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
/* Wait until the firmware tells it is up and running */
do {
- pci_read_config_dword(nhi->pdev, VS_CAP_9, &val);
+ pci_read_config_dword(pdev, VS_CAP_9, &val);
if (val & VS_CAP_9_FW_READY)
return 0;
usleep_range(3000, 3100);
@@ -78,14 +79,16 @@ static int icl_nhi_force_power(struct tb_nhi *nhi, bool power)
static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox_cmd cmd)
{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
u32 data;
data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK;
- pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID);
+ pci_write_config_dword(pdev, VS_CAP_19, data | VS_CAP_19_VALID);
}
static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
unsigned long end;
u32 data;
@@ -94,7 +97,7 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
end = jiffies + msecs_to_jiffies(timeout);
do {
- pci_read_config_dword(nhi->pdev, VS_CAP_18, &data);
+ pci_read_config_dword(pdev, VS_CAP_18, &data);
if (data & VS_CAP_18_DONE)
goto clear;
usleep_range(1000, 1100);
@@ -104,24 +107,25 @@ static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout)
clear:
/* Clear the valid bit */
- pci_write_config_dword(nhi->pdev, VS_CAP_19, 0);
+ pci_write_config_dword(pdev, VS_CAP_19, 0);
return 0;
}
static void icl_nhi_set_ltr(struct tb_nhi *nhi)
{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
u32 max_ltr, ltr;
- pci_read_config_dword(nhi->pdev, VS_CAP_16, &max_ltr);
+ pci_read_config_dword(pdev, VS_CAP_16, &max_ltr);
max_ltr &= 0xffff;
/* Program the same value for both snoop and no-snoop */
ltr = max_ltr << 16 | max_ltr;
- pci_write_config_dword(nhi->pdev, VS_CAP_15, ltr);
+ pci_write_config_dword(pdev, VS_CAP_15, ltr);
}
static int icl_nhi_suspend(struct tb_nhi *nhi)
{
- struct tb *tb = pci_get_drvdata(nhi->pdev);
+ struct tb *tb = dev_get_drvdata(nhi->dev);
int ret;
if (icl_nhi_is_device_connected(nhi))
@@ -144,7 +148,7 @@ static int icl_nhi_suspend(struct tb_nhi *nhi)
static int icl_nhi_suspend_noirq(struct tb_nhi *nhi, bool wakeup)
{
- struct tb *tb = pci_get_drvdata(nhi->pdev);
+ struct tb *tb = dev_get_drvdata(nhi->dev);
enum icl_lc_mailbox_cmd cmd;
if (!pm_suspend_via_firmware())
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index bfcab98faf4b..1817bb7afd33 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -211,6 +211,7 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
{
+ struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
struct pci_dev *root_port;
/*
@@ -219,16 +220,17 @@ static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
* itself. To be on the safe side keep the root port in D0 during
* the whole upgrade process.
*/
- root_port = pcie_find_root_port(sw->tb->nhi->pdev);
+ root_port = pcie_find_root_port(pdev);
if (root_port)
pm_runtime_get_noresume(&root_port->dev);
}
static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
{
+ struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
struct pci_dev *root_port;
- root_port = pcie_find_root_port(sw->tb->nhi->pdev);
+ root_port = pcie_find_root_port(pdev);
if (root_port)
pm_runtime_put(&root_port->dev);
}
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index a9d26a2ec259..e09259b35d40 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -3307,13 +3307,14 @@ static const struct tb_cm_ops tb_cm_ops = {
*/
static bool tb_apple_add_links(struct tb_nhi *nhi)
{
+ struct pci_dev *nhi_pdev = to_pci_dev(nhi->dev);
struct pci_dev *upstream, *pdev;
bool ret;
if (!x86_apple_machine)
return false;
- switch (nhi->pdev->device) {
+ switch (nhi_pdev->device) {
case PCI_DEVICE_ID_INTEL_LIGHT_RIDGE:
case PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C:
case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
@@ -3323,7 +3324,7 @@ static bool tb_apple_add_links(struct tb_nhi *nhi)
return false;
}
- upstream = pci_upstream_bridge(nhi->pdev);
+ upstream = pci_upstream_bridge(nhi_pdev);
while (upstream) {
if (!pci_is_pcie(upstream))
return false;
@@ -3350,15 +3351,15 @@ static bool tb_apple_add_links(struct tb_nhi *nhi)
!pdev->is_pciehp)
continue;
- link = device_link_add(&pdev->dev, &nhi->pdev->dev,
+ link = device_link_add(&pdev->dev, nhi->dev,
DL_FLAG_AUTOREMOVE_SUPPLIER |
DL_FLAG_PM_RUNTIME);
if (link) {
- dev_dbg(&nhi->pdev->dev, "created link from %s\n",
+ dev_dbg(nhi->dev, "created link from %s\n",
dev_name(&pdev->dev));
ret = true;
} else {
- dev_warn(&nhi->pdev->dev, "device link creation from %s failed\n",
+ dev_warn(nhi->dev, "device link creation from %s failed\n",
dev_name(&pdev->dev));
}
}
diff --git a/drivers/thunderbolt/tb.h b/drivers/thunderbolt/tb.h
index 229b9e7961fb..f11a131fb6e3 100644
--- a/drivers/thunderbolt/tb.h
+++ b/drivers/thunderbolt/tb.h
@@ -725,11 +725,11 @@ static inline int tb_port_write(struct tb_port *port, const void *buffer,
length);
}
-#define tb_err(tb, fmt, arg...) dev_err(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_warn(tb, fmt, arg...) dev_warn(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## arg)
-#define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg)
+#define tb_err(tb, fmt, arg...) dev_err((tb)->nhi->dev, fmt, ## arg)
+#define tb_WARN(tb, fmt, arg...) dev_WARN((tb)->nhi->dev, fmt, ## arg)
+#define tb_warn(tb, fmt, arg...) dev_warn((tb)->nhi->dev, fmt, ## arg)
+#define tb_info(tb, fmt, arg...) dev_info((tb)->nhi->dev, fmt, ## arg)
+#define tb_dbg(tb, fmt, arg...) dev_dbg((tb)->nhi->dev, fmt, ## arg)
#define __TB_SW_PRINT(level, sw, fmt, arg...) \
do { \
diff --git a/drivers/thunderbolt/usb4_port.c b/drivers/thunderbolt/usb4_port.c
index c32d3516e780..890de530debc 100644
--- a/drivers/thunderbolt/usb4_port.c
+++ b/drivers/thunderbolt/usb4_port.c
@@ -138,7 +138,7 @@ bool usb4_usb3_port_match(struct device *usb4_port_dev,
return false;
/* Check if USB3 fwnode references same NHI where USB4 port resides */
- if (!device_match_fwnode(&nhi->pdev->dev, nhi_fwnode))
+ if (!device_match_fwnode(nhi->dev, nhi_fwnode))
return false;
if (fwnode_property_read_u8(usb3_port_fwnode, "usb4-port-number", &usb4_port_num))
diff --git a/include/linux/thunderbolt.h b/include/linux/thunderbolt.h
index bbdbbc84c999..89e8e374dcdb 100644
--- a/include/linux/thunderbolt.h
+++ b/include/linux/thunderbolt.h
@@ -482,12 +482,11 @@ static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
* struct tb_nhi - thunderbolt native host interface
* @lock: Must be held during ring creation/destruction. Is acquired by
* interrupt_work when dispatching interrupts to individual rings.
- * @pdev: Pointer to the PCI device
+ * @dev: Device associated with this NHI instance
* @ops: NHI specific optional ops
* @iobase: MMIO space of the NHI
* @tx_rings: All Tx rings available on this host controller
* @rx_rings: All Rx rings available on this host controller
- * @msix_ida: Used to allocate MSI-X vectors for rings
* @going_away: The host controller device is about to disappear so when
* this flag is set, avoid touching the hardware anymore.
* @iommu_dma_protection: An IOMMU will isolate external-facing ports.
@@ -499,12 +498,11 @@ static inline struct tb_xdomain *tb_service_parent(struct tb_service *svc)
*/
struct tb_nhi {
spinlock_t lock;
- struct pci_dev *pdev;
+ struct device *dev;
const struct tb_nhi_ops *ops;
void __iomem *iobase;
struct tb_ring **tx_rings;
struct tb_ring **rx_rings;
- struct ida msix_ida;
bool going_away;
bool iommu_dma_protection;
struct work_struct interrupt_work;
@@ -685,7 +683,7 @@ void tb_ring_poll_complete(struct tb_ring *ring);
*/
static inline struct device *tb_ring_dma_device(struct tb_ring *ring)
{
- return &ring->nhi->pdev->dev;
+ return ring->nhi->dev;
}
bool usb4_usb3_port_match(struct device *usb4_port_dev,
--
2.54.0
* [PATCH v3 2/4] thunderbolt: Separate out common NHI bits
2026-05-13 16:23 [PATCH v3 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
2026-05-13 16:23 ` [PATCH v3 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
@ 2026-05-13 16:23 ` Konrad Dybcio
2026-05-15 6:28 ` Mika Westerberg
2026-05-13 16:23 ` [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
2026-05-13 16:23 ` [PATCH v3 4/4] thunderbolt: Add some more descriptive probe error messages Konrad Dybcio
3 siblings, 1 reply; 9+ messages in thread
From: Konrad Dybcio @ 2026-05-13 16:23 UTC (permalink / raw)
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
Konrad Dybcio
From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Add a new file encapsulating most of the PCI NHI specifics
(intentionally leaving some odd cookies behind to make the layering
simpler). Most notably, separate out nhi_probe() to make it easier to
register other types of NHIs.
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
drivers/thunderbolt/Makefile | 2 +-
drivers/thunderbolt/nhi.c | 463 +++++-------------------------------------
drivers/thunderbolt/nhi.h | 31 +++
drivers/thunderbolt/nhi_ops.c | 9 +
drivers/thunderbolt/pci.c | 439 +++++++++++++++++++++++++++++++++++++++
drivers/thunderbolt/pci.h | 19 ++
drivers/thunderbolt/switch.c | 43 ++--
7 files changed, 558 insertions(+), 448 deletions(-)
diff --git a/drivers/thunderbolt/Makefile b/drivers/thunderbolt/Makefile
index b44b32dcb832..eb1bfc5e5c3c 100644
--- a/drivers/thunderbolt/Makefile
+++ b/drivers/thunderbolt/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
ccflags-y := -I$(src)
obj-${CONFIG_USB4} := thunderbolt.o
-thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o
+thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o pci.o path.o tunnel.o eeprom.o
thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o tmu.o usb4.o
thunderbolt-objs += usb4_port.o nvm.o retimer.o quirks.o clx.o
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 59b261c078b6..740c10ee852b 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -33,38 +33,13 @@
* transferred.
*/
#define RING_E2E_RESERVED_HOPID RING_FIRST_USABLE_HOPID
-/*
- * Minimal number of vectors when we use MSI-X. Two for control channel
- * Rx/Tx and the rest four are for cross domain DMA paths.
- */
-#define MSIX_MIN_VECS 6
-#define MSIX_MAX_VECS 16
#define NHI_MAILBOX_TIMEOUT 500 /* ms */
-/* Host interface quirks */
-#define QUIRK_AUTO_CLEAR_INT BIT(0)
-#define QUIRK_E2E BIT(1)
-
static bool host_reset = true;
module_param(host_reset, bool, 0444);
MODULE_PARM_DESC(host_reset, "reset USB4 host router (default: true)");
-/**
- * struct tb_nhi_pci - NHI device connected over PCIe
- * @nhi: NHI device
- * @msix_ida: Used to allocate MSI-X vectors for rings
- */
-struct tb_nhi_pci {
- struct tb_nhi nhi;
- struct ida msix_ida;
-};
-
-static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
-{
- return container_of(nhi, struct tb_nhi_pci, nhi);
-}
-
static int ring_interrupt_index(const struct tb_ring *ring)
{
int bit = ring->hop;
@@ -173,7 +148,7 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
*
* Use only during init and shutdown.
*/
-static void nhi_disable_interrupts(struct tb_nhi *nhi)
+void nhi_disable_interrupts(struct tb_nhi *nhi)
{
int i = 0;
/* disable interrupts */
@@ -458,7 +433,7 @@ static void ring_clear_msix(const struct tb_ring *ring)
4 * (ring->nhi->hop_count / 32));
}
-static irqreturn_t ring_msix(int irq, void *data)
+irqreturn_t ring_msix(int irq, void *data)
{
struct tb_ring *ring = data;
@@ -472,55 +447,6 @@ static irqreturn_t ring_msix(int irq, void *data)
return IRQ_HANDLED;
}
-static int ring_request_msix(struct tb_ring *ring, bool no_suspend)
-{
- struct tb_nhi *nhi = ring->nhi;
- struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
- struct pci_dev *pdev = to_pci_dev(nhi->dev);
- unsigned long irqflags;
- int ret;
-
- if (!pdev->msix_enabled)
- return 0;
-
- ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
- if (ret < 0)
- return ret;
-
- ring->vector = ret;
-
- ret = pci_irq_vector(pdev, ring->vector);
- if (ret < 0)
- goto err_ida_remove;
-
- ring->irq = ret;
-
- irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
- ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
- if (ret)
- goto err_ida_remove;
-
- return 0;
-
-err_ida_remove:
- ida_free(&nhi_pci->msix_ida, ring->vector);
-
- return ret;
-}
-
-static void ring_release_msix(struct tb_ring *ring)
-{
- struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
-
- if (ring->irq <= 0)
- return;
-
- free_irq(ring->irq, ring);
- ida_free(&nhi_pci->msix_ida, ring->vector);
- ring->vector = 0;
- ring->irq = 0;
-}
-
static int nhi_alloc_hop(struct tb_nhi *nhi, struct tb_ring *ring)
{
unsigned int start_hop = RING_FIRST_USABLE_HOPID;
@@ -633,8 +559,10 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
if (!ring->descriptors)
goto err_free_ring;
- if (ring_request_msix(ring, flags & RING_FLAG_NO_SUSPEND))
- goto err_free_descs;
+ if (nhi->ops && nhi->ops->request_ring_irq) {
+ if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND))
+ goto err_free_descs;
+ }
if (nhi_alloc_hop(nhi, ring))
goto err_release_msix;
@@ -642,7 +570,8 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
return ring;
err_release_msix:
- ring_release_msix(ring);
+ if (nhi->ops && nhi->ops->release_ring_irq)
+ nhi->ops->release_ring_irq(ring);
err_free_descs:
dma_free_coherent(ring->nhi->dev,
ring->size * sizeof(*ring->descriptors),
@@ -832,6 +761,8 @@ EXPORT_SYMBOL_GPL(tb_ring_stop);
*/
void tb_ring_free(struct tb_ring *ring)
{
+ struct tb_nhi *nhi = ring->nhi;
+
spin_lock_irq(&ring->nhi->lock);
/*
* Dissociate the ring from the NHI. This also ensures that
@@ -848,7 +779,8 @@ void tb_ring_free(struct tb_ring *ring)
}
spin_unlock_irq(&ring->nhi->lock);
- ring_release_msix(ring);
+ if (nhi->ops && nhi->ops->release_ring_irq)
+ nhi->ops->release_ring_irq(ring);
dma_free_coherent(ring->nhi->dev,
ring->size * sizeof(*ring->descriptors),
@@ -929,7 +861,7 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi)
return (enum nhi_fw_mode)val;
}
-static void nhi_interrupt_work(struct work_struct *work)
+void nhi_interrupt_work(struct work_struct *work)
{
struct tb_nhi *nhi = container_of(work, typeof(*nhi), interrupt_work);
int value = 0; /* Suppress uninitialized usage warning. */
@@ -979,7 +911,7 @@ static void nhi_interrupt_work(struct work_struct *work)
spin_unlock_irq(&nhi->lock);
}
-static irqreturn_t nhi_msi(int irq, void *data)
+irqreturn_t nhi_msi(int irq, void *data)
{
struct tb_nhi *nhi = data;
schedule_work(&nhi->interrupt_work);
@@ -988,8 +920,7 @@ static irqreturn_t nhi_msi(int irq, void *data)
static int __nhi_suspend_noirq(struct device *dev, bool wakeup)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
struct tb_nhi *nhi = tb->nhi;
int ret;
@@ -1013,21 +944,19 @@ static int nhi_suspend_noirq(struct device *dev)
static int nhi_freeze_noirq(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
return tb_domain_freeze_noirq(tb);
}
static int nhi_thaw_noirq(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
return tb_domain_thaw_noirq(tb);
}
-static bool nhi_wake_supported(struct pci_dev *pdev)
+static bool nhi_wake_supported(struct device *dev)
{
u8 val;
@@ -1035,7 +964,7 @@ static bool nhi_wake_supported(struct pci_dev *pdev)
* If power rails are sustainable for wakeup from S4 this
* property is set by the BIOS.
*/
- if (!device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val))
+ if (!device_property_read_u8(dev, "WAKE_SUPPORTED", &val))
return !!val;
return true;
@@ -1043,14 +972,13 @@ static bool nhi_wake_supported(struct pci_dev *pdev)
static int nhi_poweroff_noirq(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
bool wakeup;
- wakeup = device_may_wakeup(dev) && nhi_wake_supported(pdev);
+ wakeup = device_may_wakeup(dev) && nhi_wake_supported(dev);
return __nhi_suspend_noirq(dev, wakeup);
}
-static void nhi_enable_int_throttling(struct tb_nhi *nhi)
+void nhi_enable_int_throttling(struct tb_nhi *nhi)
{
/* Throttling is specified in 256ns increments */
u32 throttle = DIV_ROUND_UP(128 * NSEC_PER_USEC, 256);
@@ -1068,8 +996,7 @@ static void nhi_enable_int_throttling(struct tb_nhi *nhi)
static int nhi_resume_noirq(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
struct tb_nhi *nhi = tb->nhi;
int ret;
@@ -1078,7 +1005,7 @@ static int nhi_resume_noirq(struct device *dev)
* unplugged last device which causes the host controller to go
* away on PCs.
*/
- if (!pci_device_is_present(pdev)) {
+ if (nhi->ops && nhi->ops->is_present && !nhi->ops->is_present(nhi)) {
nhi->going_away = true;
} else {
if (nhi->ops && nhi->ops->resume_noirq) {
@@ -1094,32 +1021,29 @@ static int nhi_resume_noirq(struct device *dev)
static int nhi_suspend(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
return tb_domain_suspend(tb);
}
static void nhi_complete(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
/*
* If we were runtime suspended when system suspend started,
* schedule runtime resume now. It should bring the domain back
* to functional state.
*/
- if (pm_runtime_suspended(&pdev->dev))
- pm_runtime_resume(&pdev->dev);
+ if (pm_runtime_suspended(dev))
+ pm_runtime_resume(dev);
else
tb_domain_complete(tb);
}
static int nhi_runtime_suspend(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
struct tb_nhi *nhi = tb->nhi;
int ret;
@@ -1137,8 +1061,7 @@ static int nhi_runtime_suspend(struct device *dev)
static int nhi_runtime_resume(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct tb *tb = pci_get_drvdata(pdev);
+ struct tb *tb = dev_get_drvdata(dev);
struct tb_nhi *nhi = tb->nhi;
int ret;
@@ -1152,10 +1075,8 @@ static int nhi_runtime_resume(struct device *dev)
return tb_domain_runtime_resume(tb);
}
-static void nhi_shutdown(struct tb_nhi *nhi)
+void nhi_shutdown(struct tb_nhi *nhi)
{
- struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
- struct pci_dev *pdev = to_pci_dev(nhi->dev);
int i;
dev_dbg(nhi->dev, "shutdown\n");
@@ -1169,90 +1090,11 @@ static void nhi_shutdown(struct tb_nhi *nhi)
"RX ring %d is still active\n", i);
}
nhi_disable_interrupts(nhi);
- /*
- * We have to release the irq before calling flush_work. Otherwise an
- * already executing IRQ handler could call schedule_work again.
- */
- if (!pdev->msix_enabled) {
- devm_free_irq(nhi->dev, pdev->irq, nhi);
- flush_work(&nhi->interrupt_work);
- }
- ida_destroy(&nhi_pci->msix_ida);
if (nhi->ops && nhi->ops->shutdown)
nhi->ops->shutdown(nhi);
}
-static void nhi_check_quirks(struct tb_nhi_pci *nhi_pci)
-{
- struct tb_nhi *nhi = &nhi_pci->nhi;
- struct pci_dev *pdev = to_pci_dev(nhi->dev);
-
- if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
- /*
- * Intel hardware supports auto clear of the interrupt
- * status register right after interrupt is being
- * issued.
- */
- nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
-
- switch (pdev->device) {
- case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
- case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
- /*
- * Falcon Ridge controller needs the end-to-end
- * flow control workaround to avoid losing Rx
- * packets when RING_FLAG_E2E is set.
- */
- nhi->quirks |= QUIRK_E2E;
- break;
- }
- }
-}
-
-static int nhi_check_iommu_pci_dev(struct pci_dev *pdev, void *data)
-{
- if (!pdev->external_facing ||
- !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
- return 0;
- *(bool *)data = true;
- return 1; /* Stop walking */
-}
-
-static void nhi_check_iommu(struct tb_nhi_pci *nhi_pci)
-{
- struct tb_nhi *nhi = &nhi_pci->nhi;
- struct pci_dev *pdev = to_pci_dev(nhi->dev);
- struct pci_bus *bus = pdev->bus;
- bool port_ok = false;
-
- /*
- * Ideally what we'd do here is grab every PCI device that
- * represents a tunnelling adapter for this NHI and check their
- * status directly, but unfortunately USB4 seems to make it
- * obnoxiously difficult to reliably make any correlation.
- *
- * So for now we'll have to bodge it... Hoping that the system
- * is at least sane enough that an adapter is in the same PCI
- * segment as its NHI, if we can find *something* on that segment
- * which meets the requirements for Kernel DMA Protection, we'll
- * take that to imply that firmware is aware and has (hopefully)
- * done the right thing in general. We need to know that the PCI
- * layer has seen the ExternalFacingPort property which will then
- * inform the IOMMU layer to enforce the complete "untrusted DMA"
- * flow, but also that the IOMMU driver itself can be trusted not
- * to have been subverted by a pre-boot DMA attack.
- */
- while (bus->parent)
- bus = bus->parent;
-
- pci_walk_bus(bus, nhi_check_iommu_pci_dev, &port_ok);
-
- nhi->iommu_dma_protection = port_ok;
- dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
- str_enabled_disabled(port_ok));
-}
-
static void nhi_reset(struct tb_nhi *nhi)
{
ktime_t timeout;
@@ -1284,53 +1126,6 @@ static void nhi_reset(struct tb_nhi *nhi)
dev_warn(nhi->dev, "timeout resetting host router\n");
}
-static int nhi_init_msi(struct tb_nhi_pci *nhi_pci)
-{
- struct tb_nhi *nhi = &nhi_pci->nhi;
- struct pci_dev *pdev = to_pci_dev(nhi->dev);
- struct device *dev = &pdev->dev;
- int res, irq, nvec;
-
- ida_init(&nhi_pci->msix_ida);
-
- /*
- * The NHI has 16 MSI-X vectors or a single MSI. We first try to
- * get all MSI-X vectors and if we succeed, each ring will have
- * one MSI-X. If for some reason that does not work out, we
- * fallback to a single MSI.
- */
- nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
- PCI_IRQ_MSIX);
- if (nvec < 0) {
- nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
- if (nvec < 0)
- return nvec;
-
- INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
-
- irq = pci_irq_vector(pdev, 0);
- if (irq < 0)
- return irq;
-
- res = devm_request_irq(&pdev->dev, irq, nhi_msi,
- IRQF_NO_SUSPEND, "thunderbolt", nhi);
- if (res)
- return dev_err_probe(dev, res, "request_irq failed, aborting\n");
- }
-
- return 0;
-}
-
-static bool nhi_imr_valid(struct pci_dev *pdev)
-{
- u8 val;
-
- if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val))
- return !!val;
-
- return true;
-}
-
static struct tb *nhi_select_cm(struct tb_nhi *nhi)
{
struct tb *tb;
@@ -1354,64 +1149,40 @@ static struct tb *nhi_select_cm(struct tb_nhi *nhi)
return tb;
}
-static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+int nhi_probe(struct tb_nhi *nhi)
{
- struct device *dev = &pdev->dev;
- struct tb_nhi_pci *nhi_pci;
- struct tb_nhi *nhi;
+ struct device *dev = nhi->dev;
struct tb *tb;
int res;
- if (!nhi_imr_valid(pdev))
- return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n");
-
- res = pcim_enable_device(pdev);
- if (res)
- return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
-
- nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
- if (!nhi_pci)
- return -ENOMEM;
-
- nhi = &nhi_pci->nhi;
- nhi->dev = dev;
- nhi->ops = (const struct tb_nhi_ops *)id->driver_data;
-
- nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
- res = PTR_ERR_OR_ZERO(nhi->iobase);
- if (res)
- return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
-
nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
- nhi->tx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
+ nhi->tx_rings = devm_kcalloc(dev, nhi->hop_count,
sizeof(*nhi->tx_rings), GFP_KERNEL);
- nhi->rx_rings = devm_kcalloc(&pdev->dev, nhi->hop_count,
+ nhi->rx_rings = devm_kcalloc(dev, nhi->hop_count,
sizeof(*nhi->rx_rings), GFP_KERNEL);
if (!nhi->tx_rings || !nhi->rx_rings)
return -ENOMEM;
- nhi_check_quirks(nhi_pci);
- nhi_check_iommu(nhi_pci);
nhi_reset(nhi);
/* In case someone left them on. */
nhi_disable_interrupts(nhi);
nhi_enable_int_throttling(nhi);
- res = nhi_init_msi(nhi_pci);
- if (res)
- return dev_err_probe(dev, res, "cannot enable MSI, aborting\n");
+ if (nhi->ops && nhi->ops->init_interrupts) {
+ res = nhi->ops->init_interrupts(nhi);
+ if (res)
+ return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
+ }
spin_lock_init(&nhi->lock);
- res = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+ res = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (res)
return dev_err_probe(dev, res, "failed to set DMA mask\n");
- pci_set_master(pdev);
-
if (nhi->ops && nhi->ops->init) {
res = nhi->ops->init(nhi);
if (res)
@@ -1438,38 +1209,24 @@ static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id)
nhi_shutdown(nhi);
return res;
}
- pci_set_drvdata(pdev, tb);
+ dev_set_drvdata(dev, tb);
- device_wakeup_enable(&pdev->dev);
+ device_wakeup_enable(dev);
- pm_runtime_allow(&pdev->dev);
- pm_runtime_set_autosuspend_delay(&pdev->dev, TB_AUTOSUSPEND_DELAY);
- pm_runtime_use_autosuspend(&pdev->dev);
- pm_runtime_put_autosuspend(&pdev->dev);
+ pm_runtime_allow(dev);
+ pm_runtime_set_autosuspend_delay(dev, TB_AUTOSUSPEND_DELAY);
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_put_autosuspend(dev);
return 0;
}
-static void nhi_remove(struct pci_dev *pdev)
-{
- struct tb *tb = pci_get_drvdata(pdev);
- struct tb_nhi *nhi = tb->nhi;
-
- pm_runtime_get_sync(&pdev->dev);
- pm_runtime_dont_use_autosuspend(&pdev->dev);
- pm_runtime_forbid(&pdev->dev);
-
- tb_domain_remove(tb);
- wait_for_completion(&nhi->domain_released);
- nhi_shutdown(nhi);
-}
-
/*
* The tunneled pci bridges are siblings of us. Use resume_noirq to reenable
* the tunnels asap. A corresponding pci quirk blocks the downstream bridges
* resume_noirq until we are done.
*/
-static const struct dev_pm_ops nhi_pm_ops = {
+const struct dev_pm_ops nhi_pm_ops = {
.suspend_noirq = nhi_suspend_noirq,
.resume_noirq = nhi_resume_noirq,
.freeze_noirq = nhi_freeze_noirq, /*
@@ -1485,129 +1242,3 @@ static const struct dev_pm_ops nhi_pm_ops = {
.runtime_suspend = nhi_runtime_suspend,
.runtime_resume = nhi_runtime_resume,
};
-
-static struct pci_device_id nhi_ids[] = {
- /*
- * We have to specify class, the TB bridges use the same device and
- * vendor (sub)id on gen 1 and gen 2 controllers.
- */
- {
- .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
- .vendor = PCI_VENDOR_ID_INTEL,
- .device = PCI_DEVICE_ID_INTEL_LIGHT_RIDGE,
- .subvendor = 0x2222, .subdevice = 0x1111,
- },
- {
- .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
- .vendor = PCI_VENDOR_ID_INTEL,
- .device = PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C,
- .subvendor = 0x2222, .subdevice = 0x1111,
- },
- {
- .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
- .vendor = PCI_VENDOR_ID_INTEL,
- .device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI,
- .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
- },
- {
- .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
- .vendor = PCI_VENDOR_ID_INTEL,
- .device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
- .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
- },
-
- /* Thunderbolt 3 */
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- /* Thunderbolt 4 */
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0),
- .driver_data = (kernel_ulong_t)&icl_nhi_ops },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
- { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
-
- /* Any USB4 compliant host */
- { PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
-
- { 0,}
-};
-
-MODULE_DEVICE_TABLE(pci, nhi_ids);
-MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
-MODULE_LICENSE("GPL");
-
-static struct pci_driver nhi_driver = {
- .name = "thunderbolt",
- .id_table = nhi_ids,
- .probe = nhi_probe,
- .remove = nhi_remove,
- .shutdown = nhi_remove,
- .driver.pm = &nhi_pm_ops,
-};
-
-static int __init nhi_init(void)
-{
- int ret;
-
- ret = tb_domain_init();
- if (ret)
- return ret;
- ret = pci_register_driver(&nhi_driver);
- if (ret)
- tb_domain_exit();
- return ret;
-}
-
-static void __exit nhi_unload(void)
-{
- pci_unregister_driver(&nhi_driver);
- tb_domain_exit();
-}
-
-rootfs_initcall(nhi_init);
-module_exit(nhi_unload);
diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
index 24ac4246d0ca..ef8defeaed33 100644
--- a/drivers/thunderbolt/nhi.h
+++ b/drivers/thunderbolt/nhi.h
@@ -29,6 +29,14 @@ enum nhi_mailbox_cmd {
int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data);
enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
+void nhi_enable_int_throttling(struct tb_nhi *nhi);
+void nhi_disable_interrupts(struct tb_nhi *nhi);
+void nhi_interrupt_work(struct work_struct *work);
+irqreturn_t nhi_msi(int irq, void *data);
+irqreturn_t ring_msix(int irq, void *data);
+int nhi_probe(struct tb_nhi *nhi);
+void nhi_shutdown(struct tb_nhi *nhi);
+extern const struct dev_pm_ops nhi_pm_ops;
/**
* struct tb_nhi_ops - NHI specific optional operations
@@ -38,6 +46,12 @@ enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi);
* @runtime_suspend: NHI specific runtime_suspend hook
* @runtime_resume: NHI specific runtime_resume hook
* @shutdown: NHI specific shutdown
+ * @pre_nvm_auth: hook to run before Thunderbolt 3 NVM authentication
+ * @post_nvm_auth: hook to run after Thunderbolt 3 NVM authentication
+ * @request_ring_irq: NHI specific interrupt retrieval hook
+ * @release_ring_irq: NHI specific interrupt release hook
* @is_present: NHI specific hook to check whether the device is currently present on its parent bus
+ * @init_interrupts: NHI specific interrupt initialization hook
*/
struct tb_nhi_ops {
int (*init)(struct tb_nhi *nhi);
@@ -46,6 +60,12 @@ struct tb_nhi_ops {
int (*runtime_suspend)(struct tb_nhi *nhi);
int (*runtime_resume)(struct tb_nhi *nhi);
void (*shutdown)(struct tb_nhi *nhi);
+ void (*pre_nvm_auth)(struct tb_nhi *nhi);
+ void (*post_nvm_auth)(struct tb_nhi *nhi);
+ int (*request_ring_irq)(struct tb_ring *ring, bool no_suspend);
+ void (*release_ring_irq)(struct tb_ring *ring);
+ bool (*is_present)(struct tb_nhi *nhi);
+ int (*init_interrupts)(struct tb_nhi *nhi);
};
extern const struct tb_nhi_ops icl_nhi_ops;
@@ -100,4 +120,15 @@ extern const struct tb_nhi_ops icl_nhi_ops;
#define PCI_CLASS_SERIAL_USB_USB4 0x0c0340
+/* Host interface quirks */
+#define QUIRK_AUTO_CLEAR_INT BIT(0)
+#define QUIRK_E2E BIT(1)
+
+/*
+ * Minimal number of vectors when we use MSI-X. Two for control channel
+ * Rx/Tx and the rest four are for cross domain DMA paths.
+ */
+#define MSIX_MIN_VECS 6
+#define MSIX_MAX_VECS 16
+
#endif
diff --git a/drivers/thunderbolt/nhi_ops.c b/drivers/thunderbolt/nhi_ops.c
index 8c50066f3411..530337a78322 100644
--- a/drivers/thunderbolt/nhi_ops.c
+++ b/drivers/thunderbolt/nhi_ops.c
@@ -11,6 +11,7 @@
#include "nhi.h"
#include "nhi_regs.h"
+#include "pci.h"
#include "tb.h"
/* Ice Lake specific NHI operations */
@@ -176,6 +177,8 @@ static int icl_nhi_resume(struct tb_nhi *nhi)
static void icl_nhi_shutdown(struct tb_nhi *nhi)
{
+ nhi_pci_shutdown(nhi);
+
icl_nhi_force_power(nhi, false);
}
@@ -186,4 +189,10 @@ const struct tb_nhi_ops icl_nhi_ops = {
.runtime_suspend = icl_nhi_suspend,
.runtime_resume = icl_nhi_resume,
.shutdown = icl_nhi_shutdown,
+ .pre_nvm_auth = nhi_pci_start_dma_port,
+ .post_nvm_auth = nhi_pci_complete_dma_port,
+ .request_ring_irq = nhi_pci_ring_request_msix,
+ .release_ring_irq = nhi_pci_ring_release_msix,
+ .is_present = nhi_pci_is_present,
+ .init_interrupts = nhi_pci_init_msi,
};
diff --git a/drivers/thunderbolt/pci.c b/drivers/thunderbolt/pci.c
new file mode 100644
index 000000000000..0aaa9df0538d
--- /dev/null
+++ b/drivers/thunderbolt/pci.c
@@ -0,0 +1,439 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Thunderbolt driver - PCI NHI driver
+ *
+ * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com>
+ * Copyright (C) 2018, Intel Corporation
+ */
+
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/iommu.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/property.h>
+#include <linux/string_helpers.h>
+
+#include "nhi.h"
+#include "nhi_regs.h"
+#include "pci.h"
+#include "tb.h"
+
+/**
+ * struct tb_nhi_pci - NHI device connected over PCIe
+ * @nhi: NHI device
+ * @msix_ida: Used to allocate MSI-X vectors for rings
+ */
+struct tb_nhi_pci {
+ struct tb_nhi nhi;
+ struct ida msix_ida;
+};
+
+static inline struct tb_nhi_pci *nhi_to_pci(struct tb_nhi *nhi)
+{
+ return container_of(nhi, struct tb_nhi_pci, nhi);
+}
+
+static void nhi_pci_check_quirks(struct tb_nhi_pci *nhi_pci)
+{
+ struct tb_nhi *nhi = &nhi_pci->nhi;
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
+ /*
+ * Intel hardware supports auto-clearing of the interrupt
+ * status register right after the interrupt is issued.
+ */
+ nhi->quirks |= QUIRK_AUTO_CLEAR_INT;
+
+ switch (pdev->device) {
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI:
+ case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
+ /*
+ * Falcon Ridge controller needs the end-to-end
+ * flow control workaround to avoid losing Rx
+ * packets when RING_FLAG_E2E is set.
+ */
+ nhi->quirks |= QUIRK_E2E;
+ break;
+ }
+ }
+}
+
+static int nhi_pci_check_iommu_pdev(struct pci_dev *pdev, void *data)
+{
+ if (!pdev->external_facing ||
+ !device_iommu_capable(&pdev->dev, IOMMU_CAP_PRE_BOOT_PROTECTION))
+ return 0;
+ *(bool *)data = true;
+ return 1; /* Stop walking */
+}
+
+static void nhi_pci_check_iommu(struct tb_nhi_pci *nhi_pci)
+{
+ struct tb_nhi *nhi = &nhi_pci->nhi;
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ struct pci_bus *bus = pdev->bus;
+ bool port_ok = false;
+
+ /*
+ * Ideally what we'd do here is grab every PCI device that
+ * represents a tunnelling adapter for this NHI and check their
+ * status directly, but unfortunately USB4 seems to make it
+ * obnoxiously difficult to reliably make any correlation.
+ *
+ * So for now we'll have to bodge it... Hoping that the system
+ * is at least sane enough that an adapter is in the same PCI
+ * segment as its NHI, if we can find *something* on that segment
+ * which meets the requirements for Kernel DMA Protection, we'll
+ * take that to imply that firmware is aware and has (hopefully)
+ * done the right thing in general. We need to know that the PCI
+ * layer has seen the ExternalFacingPort property which will then
+ * inform the IOMMU layer to enforce the complete "untrusted DMA"
+ * flow, but also that the IOMMU driver itself can be trusted not
+ * to have been subverted by a pre-boot DMA attack.
+ */
+ while (bus->parent)
+ bus = bus->parent;
+
+ pci_walk_bus(bus, nhi_pci_check_iommu_pdev, &port_ok);
+
+ nhi->iommu_dma_protection = port_ok;
+ dev_dbg(nhi->dev, "IOMMU DMA protection is %s\n",
+ str_enabled_disabled(port_ok));
+}
+
+int nhi_pci_init_msi(struct tb_nhi *nhi)
+{
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ struct device *dev = &pdev->dev;
+ int res, irq, nvec;
+
+ ida_init(&nhi_pci->msix_ida);
+
+ /*
+ * The NHI has 16 MSI-X vectors or a single MSI. We first try to
+ * get all MSI-X vectors and if we succeed, each ring will have
+ * one MSI-X. If for some reason that does not work out, we
+ * fall back to a single MSI.
+ */
+ nvec = pci_alloc_irq_vectors(pdev, MSIX_MIN_VECS, MSIX_MAX_VECS,
+ PCI_IRQ_MSIX);
+ if (nvec < 0) {
+ nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI);
+ if (nvec < 0)
+ return nvec;
+
+ INIT_WORK(&nhi->interrupt_work, nhi_interrupt_work);
+
+ irq = pci_irq_vector(pdev, 0);
+ if (irq < 0)
+ return irq;
+
+ res = devm_request_irq(&pdev->dev, irq, nhi_msi,
+ IRQF_NO_SUSPEND, "thunderbolt", nhi);
+ if (res)
+ return dev_err_probe(dev, res, "request_irq failed, aborting\n");
+ }
+
+ return 0;
+}
+
+static bool nhi_pci_imr_valid(struct pci_dev *pdev)
+{
+ u8 val;
+
+ if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val))
+ return !!val;
+
+ return true;
+}
+
+void nhi_pci_start_dma_port(struct tb_nhi *nhi)
+{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ struct pci_dev *root_port;
+
+ /*
+ * During host router NVM upgrade we should not allow the root port to
+ * go into D3cold because some root ports cannot trigger PME
+ * themselves. To be on the safe side, keep the root port in D0 during
+ * the whole upgrade process.
+ */
+ root_port = pcie_find_root_port(pdev);
+ if (root_port)
+ pm_runtime_get_noresume(&root_port->dev);
+}
+
+void nhi_pci_complete_dma_port(struct tb_nhi *nhi)
+{
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ struct pci_dev *root_port;
+
+ root_port = pcie_find_root_port(pdev);
+ if (root_port)
+ pm_runtime_put(&root_port->dev);
+}
+
+int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend)
+{
+ struct tb_nhi *nhi = ring->nhi;
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+ unsigned long irqflags;
+ int ret;
+
+ if (!pdev->msix_enabled)
+ return 0;
+
+ ret = ida_alloc_max(&nhi_pci->msix_ida, MSIX_MAX_VECS - 1, GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ ring->vector = ret;
+
+ ret = pci_irq_vector(pdev, ring->vector);
+ if (ret < 0)
+ goto err_ida_remove;
+
+ ring->irq = ret;
+
+ irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
+ ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
+ if (ret)
+ goto err_ida_remove;
+
+ return 0;
+
+err_ida_remove:
+ ida_free(&nhi_pci->msix_ida, ring->vector);
+
+ return ret;
+}
+
+void nhi_pci_ring_release_msix(struct tb_ring *ring)
+{
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(ring->nhi);
+
+ if (ring->irq <= 0)
+ return;
+
+ free_irq(ring->irq, ring);
+ ida_free(&nhi_pci->msix_ida, ring->vector);
+ ring->vector = 0;
+ ring->irq = 0;
+}
+
+void nhi_pci_shutdown(struct tb_nhi *nhi)
+{
+ struct tb_nhi_pci *nhi_pci = nhi_to_pci(nhi);
+ struct pci_dev *pdev = to_pci_dev(nhi->dev);
+
+ /*
+ * We have to release the irq before calling flush_work. Otherwise an
+ * already executing IRQ handler could call schedule_work again.
+ */
+ if (!pdev->msix_enabled) {
+ devm_free_irq(nhi->dev, pdev->irq, nhi);
+ flush_work(&nhi->interrupt_work);
+ }
+ ida_destroy(&nhi_pci->msix_ida);
+}
+
+bool nhi_pci_is_present(struct tb_nhi *nhi)
+{
+ return pci_device_is_present(to_pci_dev(nhi->dev));
+}
+
+static const struct tb_nhi_ops pci_nhi_default_ops = {
+ .pre_nvm_auth = nhi_pci_start_dma_port,
+ .post_nvm_auth = nhi_pci_complete_dma_port,
+ .request_ring_irq = nhi_pci_ring_request_msix,
+ .release_ring_irq = nhi_pci_ring_release_msix,
+ .shutdown = nhi_pci_shutdown,
+ .is_present = nhi_pci_is_present,
+ .init_interrupts = nhi_pci_init_msi,
+};
+
+static int nhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ struct device *dev = &pdev->dev;
+ struct tb_nhi_pci *nhi_pci;
+ struct tb_nhi *nhi;
+ int res;
+
+ if (!nhi_pci_imr_valid(pdev))
+ return dev_err_probe(dev, -ENODEV, "firmware image not valid, aborting\n");
+
+ res = pcim_enable_device(pdev);
+ if (res)
+ return dev_err_probe(dev, res, "cannot enable PCI device, aborting\n");
+
+ nhi_pci = devm_kzalloc(dev, sizeof(*nhi_pci), GFP_KERNEL);
+ if (!nhi_pci)
+ return -ENOMEM;
+
+ nhi = &nhi_pci->nhi;
+ nhi->dev = dev;
+ nhi->ops = (const struct tb_nhi_ops *)id->driver_data ?: &pci_nhi_default_ops;
+
+ nhi->iobase = pcim_iomap_region(pdev, 0, "thunderbolt");
+ res = PTR_ERR_OR_ZERO(nhi->iobase);
+ if (res)
+ return dev_err_probe(dev, res, "cannot obtain PCI resources, aborting\n");
+
+ nhi_pci_check_quirks(nhi_pci);
+ nhi_pci_check_iommu(nhi_pci);
+
+ pci_set_master(pdev);
+
+ return nhi_probe(&nhi_pci->nhi);
+}
+
+static void nhi_pci_remove(struct pci_dev *pdev)
+{
+ struct tb *tb = pci_get_drvdata(pdev);
+ struct tb_nhi *nhi = tb->nhi;
+
+ pm_runtime_get_sync(&pdev->dev);
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_forbid(&pdev->dev);
+
+ tb_domain_remove(tb);
+ wait_for_completion(&nhi->domain_released);
+ nhi_shutdown(nhi);
+}
+
+static struct pci_device_id nhi_ids[] = {
+ /*
+ * We have to specify the class because the TB bridges use the same
+ * device and vendor (sub)IDs on gen 1 and gen 2 controllers.
+ */
+ {
+ .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_LIGHT_RIDGE,
+ .subvendor = 0x2222, .subdevice = 0x1111,
+ },
+ {
+ .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C,
+ .subvendor = 0x2222, .subdevice = 0x1111,
+ },
+ {
+ .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI,
+ .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
+ },
+ {
+ .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0,
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI,
+ .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID,
+ },
+
+ /* Thunderbolt 3 */
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_2C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_4C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_USBONLY_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_LP_USBONLY_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ /* Thunderbolt 4 */
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TGL_H_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_RPL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_M_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MTL_P_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_WCL_NHI0),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
+
+ /* Any USB4 compliant host */
+ { PCI_DEVICE_CLASS(PCI_CLASS_SERIAL_USB_USB4, ~0) },
+
+ { 0,}
+};
+
+MODULE_DEVICE_TABLE(pci, nhi_ids);
+MODULE_DESCRIPTION("Thunderbolt/USB4 core driver");
+MODULE_LICENSE("GPL");
+
+static struct pci_driver nhi_driver = {
+ .name = "thunderbolt",
+ .id_table = nhi_ids,
+ .probe = nhi_pci_probe,
+ .remove = nhi_pci_remove,
+ .shutdown = nhi_pci_remove,
+ .driver.pm = &nhi_pm_ops,
+};
+
+static int __init nhi_init(void)
+{
+ int ret;
+
+ ret = tb_domain_init();
+ if (ret)
+ return ret;
+
+ ret = pci_register_driver(&nhi_driver);
+ if (ret)
+ tb_domain_exit();
+
+ return ret;
+}
+
+static void __exit nhi_unload(void)
+{
+ pci_unregister_driver(&nhi_driver);
+ tb_domain_exit();
+}
+
+rootfs_initcall(nhi_init);
+module_exit(nhi_unload);
diff --git a/drivers/thunderbolt/pci.h b/drivers/thunderbolt/pci.h
new file mode 100644
index 000000000000..dcee402f9d20
--- /dev/null
+++ b/drivers/thunderbolt/pci.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
+ */
+
+#ifndef __TB_PCI_H
+#define __TB_PCI_H
+
+#include <linux/types.h>
+
+void nhi_pci_start_dma_port(struct tb_nhi *nhi);
+void nhi_pci_complete_dma_port(struct tb_nhi *nhi);
+int nhi_pci_ring_request_msix(struct tb_ring *ring, bool no_suspend);
+void nhi_pci_ring_release_msix(struct tb_ring *ring);
+bool nhi_pci_is_present(struct tb_nhi *nhi);
+void nhi_pci_shutdown(struct tb_nhi *nhi);
+int nhi_pci_init_msi(struct tb_nhi *nhi);
+
+#endif
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index 1817bb7afd33..d24b3a086b5c 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -209,32 +209,6 @@ static int nvm_authenticate_device_dma_port(struct tb_switch *sw)
return -ETIMEDOUT;
}
-static void nvm_authenticate_start_dma_port(struct tb_switch *sw)
-{
- struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
- struct pci_dev *root_port;
-
- /*
- * During host router NVM upgrade we should not allow root port to
- * go into D3cold because some root ports cannot trigger PME
- * itself. To be on the safe side keep the root port in D0 during
- * the whole upgrade process.
- */
- root_port = pcie_find_root_port(pdev);
- if (root_port)
- pm_runtime_get_noresume(&root_port->dev);
-}
-
-static void nvm_authenticate_complete_dma_port(struct tb_switch *sw)
-{
- struct pci_dev *pdev = to_pci_dev(sw->tb->nhi->dev);
- struct pci_dev *root_port;
-
- root_port = pcie_find_root_port(pdev);
- if (root_port)
- pm_runtime_put(&root_port->dev);
-}
-
static inline bool nvm_readable(struct tb_switch *sw)
{
if (tb_switch_is_usb4(sw)) {
@@ -260,6 +234,7 @@ static inline bool nvm_upgradeable(struct tb_switch *sw)
static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
{
+ struct tb_nhi *nhi = sw->tb->nhi;
int ret;
if (tb_switch_is_usb4(sw)) {
@@ -276,7 +251,8 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
sw->nvm->authenticating = true;
if (!tb_route(sw)) {
- nvm_authenticate_start_dma_port(sw);
+ if (nhi->ops && nhi->ops->pre_nvm_auth)
+ nhi->ops->pre_nvm_auth(nhi);
ret = nvm_authenticate_host_dma_port(sw);
} else {
ret = nvm_authenticate_device_dma_port(sw);
@@ -2745,6 +2721,7 @@ static int tb_switch_set_uuid(struct tb_switch *sw)
static int tb_switch_add_dma_port(struct tb_switch *sw)
{
+ struct tb_nhi *nhi = sw->tb->nhi;
u32 status;
int ret;
@@ -2804,8 +2781,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
*/
nvm_get_auth_status(sw, &status);
if (status) {
- if (!tb_route(sw))
- nvm_authenticate_complete_dma_port(sw);
+ if (!tb_route(sw)) {
+ if (nhi->ops && nhi->ops->post_nvm_auth)
+ nhi->ops->post_nvm_auth(nhi);
+ }
return 0;
}
@@ -2819,8 +2798,10 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
return ret;
/* Now we can allow root port to suspend again */
- if (!tb_route(sw))
- nvm_authenticate_complete_dma_port(sw);
+ if (!tb_route(sw)) {
+ if (nhi->ops && nhi->ops->post_nvm_auth)
+ nhi->ops->post_nvm_auth(nhi);
+ }
if (status) {
tb_sw_info(sw, "switch flash authentication failed\n");
--
2.54.0
^ permalink raw reply related [flat|nested] 9+ messages in thread
* [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid
2026-05-13 16:23 [PATCH v3 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
2026-05-13 16:23 ` [PATCH v3 1/4] thunderbolt: Move pci_device out of tb_nhi Konrad Dybcio
2026-05-13 16:23 ` [PATCH v3 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
@ 2026-05-13 16:23 ` Konrad Dybcio
2026-05-15 6:30 ` Mika Westerberg
2026-05-13 16:23 ` [PATCH v3 4/4] thunderbolt: Add some more descriptive probe error messages Konrad Dybcio
3 siblings, 1 reply; 9+ messages in thread
From: Konrad Dybcio @ 2026-05-13 16:23 UTC (permalink / raw)
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
Konrad Dybcio
From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Because of how fundamental ops->init_interrupts() is, it no longer
makes sense to consider cases where nhi->ops is NULL.
Drop some boilerplate around it and add a single sanity-check in
nhi_probe() instead.
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
drivers/thunderbolt/nhi.c | 32 ++++++++++++++++++--------------
drivers/thunderbolt/switch.c | 6 +++---
2 files changed, 21 insertions(+), 17 deletions(-)
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 740c10ee852b..2a8d1b3716c0 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -559,7 +559,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
if (!ring->descriptors)
goto err_free_ring;
- if (nhi->ops && nhi->ops->request_ring_irq) {
+ if (nhi->ops->request_ring_irq) {
if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND))
goto err_free_descs;
}
@@ -570,7 +570,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
return ring;
err_release_msix:
- if (nhi->ops && nhi->ops->release_ring_irq)
+ if (nhi->ops->release_ring_irq)
nhi->ops->release_ring_irq(ring);
err_free_descs:
dma_free_coherent(ring->nhi->dev,
@@ -779,7 +779,7 @@ void tb_ring_free(struct tb_ring *ring)
}
spin_unlock_irq(&ring->nhi->lock);
- if (nhi->ops && nhi->ops->release_ring_irq)
+ if (nhi->ops->release_ring_irq)
nhi->ops->release_ring_irq(ring);
dma_free_coherent(ring->nhi->dev,
@@ -928,7 +928,7 @@ static int __nhi_suspend_noirq(struct device *dev, bool wakeup)
if (ret)
return ret;
- if (nhi->ops && nhi->ops->suspend_noirq) {
+ if (nhi->ops->suspend_noirq) {
ret = nhi->ops->suspend_noirq(tb->nhi, wakeup);
if (ret)
return ret;
@@ -1008,7 +1008,7 @@ static int nhi_resume_noirq(struct device *dev)
if ((nhi->ops->is_present && !nhi->ops->is_present(nhi))) {
nhi->going_away = true;
} else {
- if (nhi->ops && nhi->ops->resume_noirq) {
+ if (nhi->ops->resume_noirq) {
ret = nhi->ops->resume_noirq(nhi);
if (ret)
return ret;
@@ -1051,7 +1051,7 @@ static int nhi_runtime_suspend(struct device *dev)
if (ret)
return ret;
- if (nhi->ops && nhi->ops->runtime_suspend) {
+ if (nhi->ops->runtime_suspend) {
ret = nhi->ops->runtime_suspend(tb->nhi);
if (ret)
return ret;
@@ -1065,7 +1065,7 @@ static int nhi_runtime_resume(struct device *dev)
struct tb_nhi *nhi = tb->nhi;
int ret;
- if (nhi->ops && nhi->ops->runtime_resume) {
+ if (nhi->ops->runtime_resume) {
ret = nhi->ops->runtime_resume(nhi);
if (ret)
return ret;
@@ -1091,7 +1091,7 @@ void nhi_shutdown(struct tb_nhi *nhi)
}
nhi_disable_interrupts(nhi);
- if (nhi->ops && nhi->ops->shutdown)
+ if (nhi->ops->shutdown)
nhi->ops->shutdown(nhi);
}
@@ -1155,6 +1155,12 @@ int nhi_probe(struct tb_nhi *nhi)
struct tb *tb;
int res;
+ if (!nhi->ops)
+ return dev_err_probe(dev, -EINVAL, "NHI ops not set\n");
+
+ if (!nhi->ops->init_interrupts)
+ return dev_err_probe(dev, -EINVAL, "missing required NHI ops\n");
+
nhi->hop_count = ioread32(nhi->iobase + REG_CAPS) & 0x3ff;
dev_dbg(dev, "total paths: %d\n", nhi->hop_count);
@@ -1171,11 +1177,9 @@ int nhi_probe(struct tb_nhi *nhi)
nhi_disable_interrupts(nhi);
nhi_enable_int_throttling(nhi);
- if (nhi->ops && nhi->ops->init_interrupts) {
- res = nhi->ops->init_interrupts(nhi);
- if (res)
- return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
- }
+ res = nhi->ops->init_interrupts(nhi);
+ if (res)
+ return dev_err_probe(dev, res, "cannot enable interrupts, aborting\n");
spin_lock_init(&nhi->lock);
@@ -1183,7 +1187,7 @@ int nhi_probe(struct tb_nhi *nhi)
if (res)
return dev_err_probe(dev, res, "failed to set DMA mask\n");
- if (nhi->ops && nhi->ops->init) {
+ if (nhi->ops->init) {
res = nhi->ops->init(nhi);
if (res)
return res;
diff --git a/drivers/thunderbolt/switch.c b/drivers/thunderbolt/switch.c
index d24b3a086b5c..9cf37efe699a 100644
--- a/drivers/thunderbolt/switch.c
+++ b/drivers/thunderbolt/switch.c
@@ -251,7 +251,7 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
sw->nvm->authenticating = true;
if (!tb_route(sw)) {
- if (nhi->ops && nhi->ops->pre_nvm_auth)
+ if (nhi->ops->pre_nvm_auth)
nhi->ops->pre_nvm_auth(nhi);
ret = nvm_authenticate_host_dma_port(sw);
} else {
@@ -2782,7 +2782,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
nvm_get_auth_status(sw, &status);
if (status) {
if (!tb_route(sw)) {
- if (nhi->ops && nhi->ops->post_nvm_auth)
+ if (nhi->ops->post_nvm_auth)
nhi->ops->post_nvm_auth(nhi);
}
return 0;
@@ -2799,7 +2799,7 @@ static int tb_switch_add_dma_port(struct tb_switch *sw)
/* Now we can allow root port to suspend again */
if (!tb_route(sw)) {
- if (nhi->ops && nhi->ops->post_nvm_auth)
+ if (nhi->ops->post_nvm_auth)
nhi->ops->post_nvm_auth(nhi);
}
--
2.54.0
* [PATCH v3 4/4] thunderbolt: Add some more descriptive probe error messages
2026-05-13 16:23 [PATCH v3 0/4] Prepwork for non-PCIe NHI/TBT hosts Konrad Dybcio
` (2 preceding siblings ...)
2026-05-13 16:23 ` [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
@ 2026-05-13 16:23 ` Konrad Dybcio
3 siblings, 0 replies; 9+ messages in thread
From: Konrad Dybcio @ 2026-05-13 16:23 UTC (permalink / raw)
To: Andreas Noever, Mika Westerberg, Yehezkel Bernat
Cc: linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu,
Konrad Dybcio
From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
Currently there are a lot of silent error-return paths in various places
where nhi_probe() can fail. Sprinkle some prints to make it clearer
where the problem lies.
Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
---
drivers/thunderbolt/nhi.c | 4 ++--
drivers/thunderbolt/tb.c | 7 ++++---
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
index 2a8d1b3716c0..2491e08bbd24 100644
--- a/drivers/thunderbolt/nhi.c
+++ b/drivers/thunderbolt/nhi.c
@@ -1190,7 +1190,7 @@ int nhi_probe(struct tb_nhi *nhi)
if (nhi->ops->init) {
res = nhi->ops->init(nhi);
if (res)
- return res;
+ return dev_err_probe(dev, res, "NHI specific init failed\n");
}
tb = nhi_select_cm(nhi);
@@ -1211,7 +1211,7 @@ int nhi_probe(struct tb_nhi *nhi)
tb_domain_put(tb);
wait_for_completion(&nhi->domain_released);
nhi_shutdown(nhi);
- return res;
+ return dev_err_probe(dev, res, "failed to add domain\n");
}
dev_set_drvdata(dev, tb);
diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
index e09259b35d40..4fd052e6552e 100644
--- a/drivers/thunderbolt/tb.c
+++ b/drivers/thunderbolt/tb.c
@@ -2997,7 +2997,8 @@ static int tb_start(struct tb *tb, bool reset)
tb->root_switch = tb_switch_alloc(tb, &tb->dev, 0);
if (IS_ERR(tb->root_switch))
- return PTR_ERR(tb->root_switch);
+ return dev_err_probe(tb->nhi->dev, PTR_ERR(tb->root_switch),
+ "failed to allocate host router\n");
/*
* ICM firmware upgrade needs running firmware and in native
@@ -3014,14 +3015,14 @@ static int tb_start(struct tb *tb, bool reset)
ret = tb_switch_configure(tb->root_switch);
if (ret) {
tb_switch_put(tb->root_switch);
- return ret;
+ return dev_err_probe(tb->nhi->dev, ret, "failed to configure host router\n");
}
/* Announce the switch to the world */
ret = tb_switch_add(tb->root_switch);
if (ret) {
tb_switch_put(tb->root_switch);
- return ret;
+ return dev_err_probe(tb->nhi->dev, ret, "failed to add host router\n");
}
/*
--
2.54.0
* Re: [PATCH v3 2/4] thunderbolt: Separate out common NHI bits
2026-05-13 16:23 ` [PATCH v3 2/4] thunderbolt: Separate out common NHI bits Konrad Dybcio
@ 2026-05-15 6:28 ` Mika Westerberg
0 siblings, 0 replies; 9+ messages in thread
From: Mika Westerberg @ 2026-05-15 6:28 UTC (permalink / raw)
To: Konrad Dybcio
Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
linux-usb, usb4-upstream, Raghavendra Thoorpu, Konrad Dybcio
Hi,
On Wed, May 13, 2026 at 06:23:33PM +0200, Konrad Dybcio wrote:
> .runtime_suspend = icl_nhi_suspend,
> .runtime_resume = icl_nhi_resume,
> .shutdown = icl_nhi_shutdown,
> + .pre_nvm_auth = nhi_pci_start_dma_port,
> + .post_nvm_auth = nhi_pci_complete_dma_port,
> + .request_ring_irq = nhi_pci_ring_request_msix,
> + .release_ring_irq = nhi_pci_ring_release_msix,
> + .is_present = nhi_pci_is_present,
> + .init_interrupts = nhi_pci_init_msi,
I think we can actually move all this ICL (Intel integrated prior to Nova
Lake) stuff into pci.c now. This is only PCI, and that way we do not need
to expose any of these functions outside of pci.c.
* Re: [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid
2026-05-13 16:23 ` [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid Konrad Dybcio
@ 2026-05-15 6:30 ` Mika Westerberg
2026-05-15 9:34 ` Konrad Dybcio
0 siblings, 1 reply; 9+ messages in thread
From: Mika Westerberg @ 2026-05-15 6:30 UTC (permalink / raw)
To: Konrad Dybcio
Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
linux-usb, usb4-upstream, Raghavendra Thoorpu, Konrad Dybcio
On Wed, May 13, 2026 at 06:23:34PM +0200, Konrad Dybcio wrote:
> From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
>
> Because of how fundamental ops->init_interrupts() is, it no longer
> makes sense to consider cases where nhi->ops is NULL.
>
> Drop some boilerplate around it and add a single sanity-check in
> nhi_probe() instead.
>
> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
> ---
> drivers/thunderbolt/nhi.c | 32 ++++++++++++++++++--------------
> drivers/thunderbolt/switch.c | 6 +++---
> 2 files changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
> index 740c10ee852b..2a8d1b3716c0 100644
> --- a/drivers/thunderbolt/nhi.c
> +++ b/drivers/thunderbolt/nhi.c
> @@ -559,7 +559,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
> if (!ring->descriptors)
> goto err_free_ring;
>
> - if (nhi->ops && nhi->ops->request_ring_irq) {
> + if (nhi->ops->request_ring_irq) {
I wonder if it makes this more readable if we wrap these like:
if (nhi_request_ring_irq(nhi)) {
> if (nhi->ops->request_ring_irq(ring, flags & RING_FLAG_NO_SUSPEND))
> goto err_free_descs;
> }
* Re: [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid
2026-05-15 6:30 ` Mika Westerberg
@ 2026-05-15 9:34 ` Konrad Dybcio
2026-05-15 10:05 ` Mika Westerberg
0 siblings, 1 reply; 9+ messages in thread
From: Konrad Dybcio @ 2026-05-15 9:34 UTC (permalink / raw)
To: Mika Westerberg, Konrad Dybcio
Cc: Andreas Noever, Mika Westerberg, Yehezkel Bernat, linux-kernel,
linux-usb, usb4-upstream, Raghavendra Thoorpu
On 5/15/26 8:30 AM, Mika Westerberg wrote:
> On Wed, May 13, 2026 at 06:23:34PM +0200, Konrad Dybcio wrote:
>> From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
>>
>> Because of how fundamental ops->init_interrupts() is, it no longer
>> makes sense to consider cases where nhi->ops is NULL.
>>
>> Drop some boilerplate around it and add a single sanity-check in
>> nhi_probe() instead.
>>
>> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
>> ---
>> drivers/thunderbolt/nhi.c | 32 ++++++++++++++++++--------------
>> drivers/thunderbolt/switch.c | 6 +++---
>> 2 files changed, 21 insertions(+), 17 deletions(-)
>>
>> diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
>> index 740c10ee852b..2a8d1b3716c0 100644
>> --- a/drivers/thunderbolt/nhi.c
>> +++ b/drivers/thunderbolt/nhi.c
>> @@ -559,7 +559,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
>> if (!ring->descriptors)
>> goto err_free_ring;
>>
>> - if (nhi->ops && nhi->ops->request_ring_irq) {
>> + if (nhi->ops->request_ring_irq) {
>
> I wonder if it makes this more readable if we wrap these like:
>
> if (nhi_request_ring_irq(nhi)) {
The UFS subsystem does that, and it results in a ton of boilerplate,
i.e. for each op you need to define something like a:
static inline T nhi_foo_bar(struct tb_nhi *nhi, ...)
{
if (nhi->ops->foo_bar)
return nhi->ops->foo_bar(...);
return 0;
}
I can do that, but I don't see real value here
Konrad
* Re: [PATCH v3 3/4] thunderbolt: Require nhi->ops be valid
2026-05-15 9:34 ` Konrad Dybcio
@ 2026-05-15 10:05 ` Mika Westerberg
0 siblings, 0 replies; 9+ messages in thread
From: Mika Westerberg @ 2026-05-15 10:05 UTC (permalink / raw)
To: Konrad Dybcio
Cc: Konrad Dybcio, Andreas Noever, Mika Westerberg, Yehezkel Bernat,
linux-kernel, linux-usb, usb4-upstream, Raghavendra Thoorpu
On Fri, May 15, 2026 at 11:34:41AM +0200, Konrad Dybcio wrote:
> On 5/15/26 8:30 AM, Mika Westerberg wrote:
> > On Wed, May 13, 2026 at 06:23:34PM +0200, Konrad Dybcio wrote:
> >> From: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
> >>
> >> Because of how fundamental ops->init_interrupts() is, it no longer
> >> makes sense to consider cases where nhi->ops is NULL.
> >>
> >> Drop some boilerplate around it and add a single sanity-check in
> >> nhi_probe() instead.
> >>
> >> Signed-off-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com>
> >> ---
> >> drivers/thunderbolt/nhi.c | 32 ++++++++++++++++++--------------
> >> drivers/thunderbolt/switch.c | 6 +++---
> >> 2 files changed, 21 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
> >> index 740c10ee852b..2a8d1b3716c0 100644
> >> --- a/drivers/thunderbolt/nhi.c
> >> +++ b/drivers/thunderbolt/nhi.c
> >> @@ -559,7 +559,7 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
> >> if (!ring->descriptors)
> >> goto err_free_ring;
> >>
> >> - if (nhi->ops && nhi->ops->request_ring_irq) {
> >> + if (nhi->ops->request_ring_irq) {
> >
> > I wonder if it makes this more readable if we wrap these like:
> >
> > if (nhi_request_ring_irq(nhi)) {
>
> The UFS subsystem does that, and it results in a ton of boilerplate,
> i.e. for each op you need to define something like a:
>
> static inline T nhi_foo_bar(struct tb_nhi *nhi, ...)
> {
> if (nhi->ops->foo_bar)
> return nhi->ops->foo_bar(...);
>
> return 0;
> }
>
> I can do that, but I don't see real value here
Yeah, I was wondering whether that would make the calls more readable, but
I guess we can do this later, if at all.