From: Gustavo Sousa <gustavo.sousa@intel.com>
To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: "Ankit Nautiyal" <ankit.k.nautiyal@intel.com>,
"Dnyaneshwar Bhadane" <dnyaneshwar.bhadane@intel.com>,
"Gustavo Sousa" <gustavo.sousa@intel.com>,
"Jouni Högander" <jouni.hogander@intel.com>,
"Juha-pekka Heikkila" <juha-pekka.heikkila@intel.com>,
"Luca Coelho" <luciano.coelho@intel.com>,
"Lucas De Marchi" <lucas.demarchi@intel.com>,
"Matt Atwood" <matthew.s.atwood@intel.com>,
"Matt Roper" <matthew.d.roper@intel.com>,
"Ravi Kumar Vodapalli" <ravi.kumar.vodapalli@intel.com>,
"Shekhar Chauhan" <shekhar.chauhan@intel.com>,
"Vinod Govindapillai" <vinod.govindapillai@intel.com>
Subject: [PATCH v4 07/11] drm/i915/xe3p_lpd: Extend Type-C flow for static DDI allocation
Date: Fri, 07 Nov 2025 21:05:40 -0300
Message-ID: <20251107-xe3p_lpd-basic-enabling-v4-7-ab3367f65f15@intel.com>
In-Reply-To: <20251107-xe3p_lpd-basic-enabling-v4-0-ab3367f65f15@intel.com>
Xe3p_LPD has a new feature that allows the driver to allocate, at
runtime, the (Type-C) DDI port used to drive a legacy connection on the
Type-C subsystem. This allows better resource utilization, because
there is no longer a need to statically reserve ports for legacy
connectors on the Type-C subsystem.

That said, our driver is not yet ready for dynamic allocation. Thus, as
an incremental step, let's add the logic containing the required
programming sequence for the allocation but, instead of selecting the
first available port, try to use the 1:1 mapping expected by the driver
today.
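For illustration only (derived from the IOM_DDI_CONSUMER_STATIC_TC()
definition added by this patch, not from a Bspec table quoted here),
that "static" mapping programs each TC port's 4-bit consumer field in
IOM_DP_RESOURCE_MNG with the value 0x8 + tc_port:

	/*
	 * tc_port index 0 -> consumer 0x8 in bits  3:0
	 * tc_port index 1 -> consumer 0x9 in bits  7:4
	 * tc_port index 2 -> consumer 0xa in bits 11:8
	 * tc_port index 3 -> consumer 0xb in bits 15:12
	 */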
Bspec: 68954
Co-developed-by: Dnyaneshwar Bhadane <dnyaneshwar.bhadane@intel.com>
Signed-off-by: Dnyaneshwar Bhadane <dnyaneshwar.bhadane@intel.com>
Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
---
NOTE: This patch is still a WIP. There are a few open issues to
resolve. Nevertheless, I'm sending it now for early feedback.

For the HIP-index handling, I have a local refactor started and need to
finish it up and send it.

The other open issue is about concurrent calls to
iom_dp_resource_lock(). It is likely that we need a software lock to
prevent concurrent access to IOM_DP_HW_RESOURCE_SEMAPHORE from within
our driver.
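A minimal sketch of one possible shape for such a lock is below. This
is only a sketch and not part of the patch: the iom_sem_lock field (and
where it lives) is made up and would need to be added and
mutex_init()'ed in a suitable place. It simply wraps the hardware
semaphore helpers from this patch with a mutex:

	static int iom_dp_resource_lock_serialized(struct intel_tc_port *tc)
	{
		struct intel_display *display = to_intel_display(tc->dig_port);
		int ret;

		/* Serialize driver-side users before touching the HW semaphore. */
		mutex_lock(&display->tc.iom_sem_lock);

		ret = iom_dp_resource_lock(tc);
		if (ret)
			mutex_unlock(&display->tc.iom_sem_lock);

		return ret;
	}

	static void iom_dp_resource_unlock_serialized(struct intel_tc_port *tc)
	{
		struct intel_display *display = to_intel_display(tc->dig_port);

		iom_dp_resource_unlock(tc);
		mutex_unlock(&display->tc.iom_sem_lock);
	}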
---
drivers/gpu/drm/i915/display/intel_display_regs.h | 20 ++-
drivers/gpu/drm/i915/display/intel_tc.c | 151 +++++++++++++++++++++-
2 files changed, 169 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_display_regs.h b/drivers/gpu/drm/i915/display/intel_display_regs.h
index 89ea0156ee06..0cf7d43ce210 100644
--- a/drivers/gpu/drm/i915/display/intel_display_regs.h
+++ b/drivers/gpu/drm/i915/display/intel_display_regs.h
@@ -2908,6 +2908,25 @@ enum skl_power_gate {
#define DP_PIN_ASSIGNMENT(idx, x) ((x) << ((idx) * 4))
/* See enum intel_tc_pin_assignment for the pin assignment field values. */
+/*
+ * FIXME: There is also a definition for this register in intel_dkl_phy_regs.h.
+ * We need to consolidate the definitions.
+ */
+#define HIP_INDEX_REG0 _MMIO(0x1010a0)
+#define HIP_168_INDEX_MASK REG_GENMASK(3, 0)
+#define HIP_168_IOM_RES_MGMT REG_FIELD_PREP(HIP_168_INDEX_MASK, 0x1)
+
+#define IOM_DP_HW_RESOURCE_SEMAPHORE _MMIO(0x168038)
+#define IOM_DP_HW_SEMLOCK REG_BIT(31)
+#define IOM_REQUESTOR_ID_MASK REG_GENMASK(3, 0)
+#define IOM_REQUESTOR_ID_DISPLAY_ENGINE REG_FIELD_PREP(IOM_REQUESTOR_ID_MASK, 0x4)
+
+#define IOM_DP_RESOURCE_MNG _MMIO(0x16802c)
+#define IOM_DDI_CONSUMER_SHIFT(tc_port) ((tc_port) * 4)
+#define IOM_DDI_CONSUMER_MASK(tc_port) (0xf << IOM_DDI_CONSUMER_SHIFT(tc_port))
+#define IOM_DDI_CONSUMER(tc_port, x) ((x) << IOM_DDI_CONSUMER_SHIFT(tc_port))
+#define IOM_DDI_CONSUMER_STATIC_TC(tc_port) IOM_DDI_CONSUMER(tc_port, 0x8 + (tc_port))
+
#define _TCSS_DDI_STATUS_1 0x161500
#define _TCSS_DDI_STATUS_2 0x161504
#define TCSS_DDI_STATUS(tc) _MMIO(_PICK_EVEN(tc, \
@@ -2946,5 +2965,4 @@ enum skl_power_gate {
#define MTL_TRDPRE_MASK REG_GENMASK(7, 0)
-
#endif /* __INTEL_DISPLAY_REGS_H__ */
diff --git a/drivers/gpu/drm/i915/display/intel_tc.c b/drivers/gpu/drm/i915/display/intel_tc.c
index 7e17ca018748..3c333999bbe4 100644
--- a/drivers/gpu/drm/i915/display/intel_tc.c
+++ b/drivers/gpu/drm/i915/display/intel_tc.c
@@ -9,6 +9,7 @@
#include "i915_reg.h"
#include "intel_atomic.h"
+#include "intel_bios.h"
#include "intel_cx0_phy_regs.h"
#include "intel_ddi.h"
#include "intel_de.h"
@@ -25,6 +26,9 @@
#include "intel_modeset_lock.h"
#include "intel_tc.h"
+#define IOM_DP_RES_SEMAPHORE_LOCK_TIMEOUT_US 10
+#define IOM_DP_RES_SEMAPHORE_RETRY_TIMEOUT_US 10000
+
enum tc_port_mode {
TC_PORT_DISCONNECTED,
TC_PORT_TBT_ALT,
@@ -1200,6 +1204,143 @@ static void xelpdp_tc_phy_get_hw_state(struct intel_tc_port *tc)
__tc_cold_unblock(tc, domain, tc_cold_wref);
}
+static void iom_res_mgmt_prepare_reg_access(struct intel_display *display)
+{
+ /*
+ * IOM resource management registers live in the 2nd 4KB page of IOM
+ * address space. So we need to configure HIP_INDEX_REG0 with the
+ * correct index.
+ *
+ * FIXME: We need to have this and dekel PHY implementation using a
+ * common abstraction to access registers on the HIP-indexed ranges, and
+ * this function would then be dropped.
+ */
+ intel_de_rmw(display, HIP_INDEX_REG0,
+ HIP_168_INDEX_MASK, HIP_168_IOM_RES_MGMT);
+}
+
+/*
+ * FIXME: This function also needs to avoid concurrent accesses from the driver
+ * itself, possibly via a software lock.
+ */
+static int iom_dp_resource_lock(struct intel_tc_port *tc)
+{
+ struct intel_display *display = to_intel_display(tc->dig_port);
+ u32 val = IOM_DP_HW_SEMLOCK | IOM_REQUESTOR_ID_DISPLAY_ENGINE;
+ int ret;
+
+ iom_res_mgmt_prepare_reg_access(display);
+ ret = poll_timeout_us(intel_de_write(display, IOM_DP_HW_RESOURCE_SEMAPHORE, val),
+ (intel_de_read(display, IOM_DP_HW_RESOURCE_SEMAPHORE) & val) == val,
+ IOM_DP_RES_SEMAPHORE_LOCK_TIMEOUT_US,
+ IOM_DP_RES_SEMAPHORE_RETRY_TIMEOUT_US, false);
+
+ if (ret)
+ drm_err(display->drm, "Port %s: timeout trying to lock IOM semaphore\n",
+ tc->port_name);
+
+ return ret;
+}
+
+static void iom_dp_resource_unlock(struct intel_tc_port *tc)
+{
+ struct intel_display *display = to_intel_display(tc->dig_port);
+
+ iom_res_mgmt_prepare_reg_access(display);
+ intel_de_write(display, IOM_DP_HW_RESOURCE_SEMAPHORE, IOM_REQUESTOR_ID_DISPLAY_ENGINE);
+}
+
+static bool xe3p_tc_iom_allocate_ddi(struct intel_tc_port *tc, bool allocate)
+{
+ struct intel_display *display = to_intel_display(tc->dig_port);
+ struct intel_digital_port *dig_port = tc->dig_port;
+ enum tc_port tc_port = intel_encoder_to_tc(&dig_port->base);
+ u32 val;
+ u32 consumer;
+ u32 expected_consumer;
+ bool ret;
+
+ if (DISPLAY_VER(display) < 35)
+ return true;
+
+ if (tc->mode != TC_PORT_LEGACY)
+ return true;
+
+ if (!intel_bios_encoder_supports_dyn_port_over_tc(dig_port->base.devdata))
+ return true;
+
+ if (iom_dp_resource_lock(tc))
+ return false;
+
+ val = intel_de_read(display, IOM_DP_RESOURCE_MNG);
+
+ consumer = val & IOM_DDI_CONSUMER_MASK(tc_port);
+ consumer >>= IOM_DDI_CONSUMER_SHIFT(tc_port);
+
+ /*
+ * Bspec instructs to select first available DDI, but our driver is not
+ * ready for such dynamic allocation yet. For now, we force a "static"
+ * allocation: map the physical port (where HPD happens) to the
+ * encoder's DDI (logical TC port, represented by tc_port).
+ */
+ expected_consumer = IOM_DDI_CONSUMER_STATIC_TC(tc_port);
+ expected_consumer >>= IOM_DDI_CONSUMER_SHIFT(tc_port);
+
+ if (allocate) {
+ struct intel_encoder *other_encoder;
+
+ /*
+ * Check if this encoder's DDI is already allocated for another
+ * physical port, which could have happened prior to the driver
+ * taking over (e.g. GOP).
+ */
+ for_each_intel_encoder(display->drm, other_encoder) {
+ enum tc_port other_tc_port = intel_encoder_to_tc(other_encoder);
+ u32 other_consumer;
+
+ if (other_tc_port == TC_PORT_NONE || other_tc_port == tc_port)
+ continue;
+
+ other_consumer = val & IOM_DDI_CONSUMER_MASK(other_tc_port);
+ other_consumer >>= IOM_DDI_CONSUMER_SHIFT(other_tc_port);
+ if (other_consumer == expected_consumer) {
+ drm_err(display->drm, "Port %s: expected consumer %u already allocated another DDI; IOM_DP_RESOURCE_MNG=0x%08x\n",
+ tc->port_name, expected_consumer, val);
+ ret = false;
+ goto out_resource_unlock;
+ }
+ }
+
+ if (consumer == 0) {
+ /* DDI is free to use, let's allocate it. */
+ val &= ~IOM_DDI_CONSUMER_MASK(tc_port);
+ val |= IOM_DDI_CONSUMER(tc_port, expected_consumer);
+ intel_de_write(display, IOM_DP_RESOURCE_MNG, val);
+ ret = true;
+ } else if (consumer == expected_consumer) {
+ /*
+ * Nothing to do, as the expected "static" DDI allocation is
+ * already in place.
+ */
+ ret = true;
+ } else {
+ drm_err(display->drm, "Port %s: DDI already allocated for consumer %u; IOM_DP_RESOURCE_MNG=0x%08x\n",
+ tc->port_name, consumer, val);
+ ret = false;
+ }
+ } else {
+ drm_WARN_ON(display->drm, consumer != expected_consumer);
+ val &= ~IOM_DDI_CONSUMER_MASK(tc_port);
+ intel_de_write(display, IOM_DP_RESOURCE_MNG, val);
+ ret = true;
+ }
+
+out_resource_unlock:
+ iom_dp_resource_unlock(tc);
+
+ return ret;
+}
+
static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
{
tc->lock_wakeref = tc_cold_block(tc);
@@ -1210,9 +1351,12 @@ static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
return true;
}
- if (!xelpdp_tc_phy_enable_tcss_power(tc, true))
+ if (!xe3p_tc_iom_allocate_ddi(tc, true))
goto out_unblock_tccold;
+ if (!xelpdp_tc_phy_enable_tcss_power(tc, true))
+ goto out_deallocate_ddi;
+
xelpdp_tc_phy_take_ownership(tc, true);
read_pin_configuration(tc);
@@ -1226,6 +1370,9 @@ static bool xelpdp_tc_phy_connect(struct intel_tc_port *tc, int required_lanes)
xelpdp_tc_phy_take_ownership(tc, false);
xelpdp_tc_phy_wait_for_tcss_power(tc, false);
+out_deallocate_ddi:
+ xe3p_tc_iom_allocate_ddi(tc, false);
+
out_unblock_tccold:
tc_cold_unblock(tc, fetch_and_zero(&tc->lock_wakeref));
@@ -1236,6 +1383,8 @@ static void xelpdp_tc_phy_disconnect(struct intel_tc_port *tc)
{
switch (tc->mode) {
case TC_PORT_LEGACY:
+ xe3p_tc_iom_allocate_ddi(tc, false);
+ fallthrough;
case TC_PORT_DP_ALT:
xelpdp_tc_phy_take_ownership(tc, false);
xelpdp_tc_phy_enable_tcss_power(tc, false);
--
2.51.0