From: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
To: intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: imre.deak@intel.com, Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Subject: [PATCH 04/14] drm/i915/dp: Rework pipe joiner logic in mode_valid
Date: Wed, 21 Jan 2026 09:23:20 +0530
Message-ID: <20260121035330.2793386-5-ankit.k.nautiyal@intel.com>
In-Reply-To: <20260121035330.2793386-1-ankit.k.nautiyal@intel.com>
Currently in intel_dp_mode_valid(), we compute the number of joined pipes
required before deciding whether DSC is needed. This ordering prevents us
from accounting for DSC-related overhead when determining pipe
requirements.
Refactor the logic to start with a single pipe and incrementally try
additional pipes only if needed. While DSC overhead is not yet computed
here, this restructuring prepares the code to support that in follow-up
changes.
Additionally, if a forced joiner configuration is present, we first check
whether it satisfies the bandwidth and timing constraints. If it does not,
we fall back to evaluating configurations with 1, 2, or 4 pipes joined
and prune or keep the mode accordingly.
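The fallback order described above can be sketched as a small standalone
model: try a forced pipe count first, then 1, 2 and 4 joined pipes in
turn, accepting the first candidate that fits the per-pipe hdisplay and
dotclock limits. The function name, parameters and limit values below are
illustrative only, not the driver's actual interfaces:

```c
/*
 * Simplified model of the joiner candidate walk in intel_dp_mode_valid().
 * Returns the first pipe count that satisfies the limits, or 0 if the
 * mode must be pruned (the MODE_CLOCK_HIGH case). All names and limits
 * here are hypothetical stand-ins for the real driver state.
 */
static int pick_joined_pipes(int forced_pipes, int hdisplay, int clock,
			     int hdisplay_limit_per_pipe,
			     int max_dotclk_per_pipe)
{
	const int candidates[] = { forced_pipes, 1, 2, 4 };
	/* Skip the forced slot when no forced configuration is present. */
	for (int i = forced_pipes ? 0 : 1; i < 4; i++) {
		int pipes = candidates[i];

		if (hdisplay > pipes * hdisplay_limit_per_pipe)
			continue;
		if (clock > pipes * max_dotclk_per_pipe)
			continue;
		return pipes; /* this candidate fits */
	}
	return 0; /* no candidate fits: prune the mode */
}
```

As in the patch, a forced configuration that fails its checks simply
falls through to the 1/2/4-pipe candidates instead of rejecting the mode
outright.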
Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
---
drivers/gpu/drm/i915/display/intel_dp.c | 144 +++++++++++++++---------
drivers/gpu/drm/i915/display/intel_dp.h | 7 ++
2 files changed, 96 insertions(+), 55 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index fc7d48460a52..02381f84fa58 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -107,6 +107,13 @@
/* Constants for DP DSC configurations */
static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15};
+static const enum joiner_type joiner_candidates[] = {
+ FORCED_JOINER,
+ NO_JOINER,
+ BIG_JOINER,
+ ULTRA_JOINER,
+};
+
/**
* intel_dp_is_edp - is the given port attached to an eDP panel (either CPU or PCH)
* @intel_dp: DP struct
@@ -1445,13 +1452,13 @@ intel_dp_mode_valid(struct drm_connector *_connector,
const struct drm_display_mode *fixed_mode;
int target_clock = mode->clock;
int max_rate, mode_rate, max_lanes, max_link_clock;
- int max_dotclk = display->cdclk.max_dotclk_freq;
u16 dsc_max_compressed_bpp = 0;
u8 dsc_slice_count = 0;
enum drm_mode_status status;
bool dsc = false;
int num_joined_pipes;
int link_bpp_x16;
+ int i;
status = intel_cpu_transcoder_mode_valid(display, mode);
if (status != MODE_OK)
@@ -1488,67 +1495,94 @@ intel_dp_mode_valid(struct drm_connector *_connector,
target_clock, mode->hdisplay,
link_bpp_x16, 0);
- num_joined_pipes = intel_dp_num_joined_pipes(intel_dp, connector,
- mode->hdisplay, target_clock);
- max_dotclk *= num_joined_pipes;
-
- if (target_clock > max_dotclk)
- return MODE_CLOCK_HIGH;
-
- status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
- if (status != MODE_OK)
- return status;
-
- if (intel_dp_has_dsc(connector)) {
- int pipe_bpp;
-
- /*
- * TBD pass the connector BPC,
- * for now U8_MAX so that max BPC on that platform would be picked
- */
- pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
-
- /*
- * Output bpp is stored in 6.4 format so right shift by 4 to get the
- * integer value since we support only integer values of bpp.
- */
- if (intel_dp_is_edp(intel_dp)) {
- dsc_max_compressed_bpp =
- drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
-
- dsc_slice_count =
- intel_dp_dsc_get_slice_count(connector,
- target_clock,
- mode->hdisplay,
- num_joined_pipes);
-
- dsc = dsc_max_compressed_bpp && dsc_slice_count;
- } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
- unsigned long bw_overhead_flags = 0;
-
- if (!drm_dp_is_uhbr_rate(max_link_clock))
- bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
-
- dsc = intel_dp_mode_valid_with_dsc(connector,
- max_link_clock, max_lanes,
- target_clock, mode->hdisplay,
- num_joined_pipes,
- output_format, pipe_bpp,
- bw_overhead_flags);
+ for (i = 0; i < ARRAY_SIZE(joiner_candidates); i++) {
+ int max_dotclk = display->cdclk.max_dotclk_freq;
+ enum joiner_type joiner = joiner_candidates[i];
+
+ status = MODE_CLOCK_HIGH;
+
+ if (joiner == FORCED_JOINER) {
+ if (!connector->force_joined_pipes)
+ continue;
+ num_joined_pipes = connector->force_joined_pipes;
+ } else {
+ num_joined_pipes = 1 << joiner;
+ }
+
+ if ((joiner >= NO_JOINER && !intel_dp_has_joiner(intel_dp)) ||
+ (joiner == BIG_JOINER && !HAS_BIGJOINER(display)) ||
+ (joiner == ULTRA_JOINER && !HAS_ULTRAJOINER(display)))
+ break;
+
+ if (mode->hdisplay > num_joined_pipes * intel_dp_hdisplay_limit(display))
+ continue;
+
+ status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
+ if (status != MODE_OK)
+ continue;
+
+ if (intel_dp_has_dsc(connector)) {
+ int pipe_bpp;
+
+ /*
+ * TBD pass the connector BPC,
+ * for now U8_MAX so that max BPC on that platform would be picked
+ */
+ pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
+
+ /*
+ * Output bpp is stored in 6.4 format so right shift by 4 to get the
+ * integer value since we support only integer values of bpp.
+ */
+ if (intel_dp_is_edp(intel_dp)) {
+ dsc_max_compressed_bpp =
+ drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
+
+ dsc_slice_count =
+ intel_dp_dsc_get_slice_count(connector,
+ target_clock,
+ mode->hdisplay,
+ num_joined_pipes);
+
+ dsc = dsc_max_compressed_bpp && dsc_slice_count;
+ } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
+ unsigned long bw_overhead_flags = 0;
+
+ if (!drm_dp_is_uhbr_rate(max_link_clock))
+ bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
+
+ dsc = intel_dp_mode_valid_with_dsc(connector,
+ max_link_clock, max_lanes,
+ target_clock, mode->hdisplay,
+ num_joined_pipes,
+ output_format, pipe_bpp,
+ bw_overhead_flags);
+ }
+ }
+
+ if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
+ continue;
+
+ if (mode_rate > max_rate && !dsc)
+ continue;
+
+ status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
+ if (status != MODE_OK)
+ continue;
+
+ max_dotclk *= num_joined_pipes;
+
+ if (target_clock <= max_dotclk) {
+ status = MODE_OK;
+ break;
}
}
- if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
- return MODE_CLOCK_HIGH;
-
- status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
if (status != MODE_OK)
return status;
- if (mode_rate > max_rate && !dsc)
- return MODE_CLOCK_HIGH;
-
return intel_dp_mode_valid_downstream(connector, mode, target_clock);
+
}
bool intel_dp_source_supports_tps3(struct intel_display *display)
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 25bfbfd291b0..a27e3b5829bd 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -24,6 +24,13 @@ struct intel_display;
struct intel_dp;
struct intel_encoder;
+enum joiner_type {
+ FORCED_JOINER = -1,
+ NO_JOINER = 0, /* 1 pipe */
+ BIG_JOINER = 1, /* 2 pipes */
+ ULTRA_JOINER = 2, /* 4 pipes */
+};
+
struct link_config_limits {
int min_rate, max_rate;
int min_lane_count, max_lane_count;
--
2.45.2