From: Jani Nikula <jani.nikula@linux.intel.com>
To: Ankit Nautiyal <ankit.k.nautiyal@intel.com>,
intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: imre.deak@intel.com, Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Subject: Re: [PATCH 04/14] drm/i915/dp: Rework pipe joiner logic in mode_valid
Date: Wed, 21 Jan 2026 12:54:08 +0200
Message-ID: <746b08e5d92c4c734011868711da53eda8883a12@intel.com>
In-Reply-To: <20260121035330.2793386-5-ankit.k.nautiyal@intel.com>
On Wed, 21 Jan 2026, Ankit Nautiyal <ankit.k.nautiyal@intel.com> wrote:
> Currently in intel_dp_mode_valid(), we compute the number of joined pipes
> required before deciding whether DSC is needed. This ordering prevents us
> from accounting for DSC-related overhead when determining pipe
> requirements.
>
> Refactor the logic to start with a single pipe and incrementally try
> additional pipes only if needed. While DSC overhead is not yet computed
> here, this restructuring prepares the code to support that in follow-up
> changes.
>
> Additionally, if a forced joiner configuration is present, we first check
> whether it satisfies the bandwidth and timing constraints. If it does not,
> we fall back to evaluating configurations with 1, 2, or 4 pipes joined
> and prune or keep the mode accordingly.
>
> Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_dp.c | 144 +++++++++++++++---------
> drivers/gpu/drm/i915/display/intel_dp.h | 7 ++
> 2 files changed, 96 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index fc7d48460a52..02381f84fa58 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -107,6 +107,13 @@
> /* Constants for DP DSC configurations */
> static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15};
>
> +static const enum joiner_type joiner_candidates[] = {
> + FORCED_JOINER,
> + NO_JOINER,
> + BIG_JOINER,
> + ULTRA_JOINER,
> +};
> +
> /**
> * intel_dp_is_edp - is the given port attached to an eDP panel (either CPU or PCH)
> * @intel_dp: DP struct
> @@ -1445,13 +1452,13 @@ intel_dp_mode_valid(struct drm_connector *_connector,
> const struct drm_display_mode *fixed_mode;
> int target_clock = mode->clock;
> int max_rate, mode_rate, max_lanes, max_link_clock;
> - int max_dotclk = display->cdclk.max_dotclk_freq;
> u16 dsc_max_compressed_bpp = 0;
> u8 dsc_slice_count = 0;
> enum drm_mode_status status;
> bool dsc = false;
> int num_joined_pipes;
> int link_bpp_x16;
> + int i;
>
> status = intel_cpu_transcoder_mode_valid(display, mode);
> if (status != MODE_OK)
> @@ -1488,67 +1495,94 @@ intel_dp_mode_valid(struct drm_connector *_connector,
> target_clock, mode->hdisplay,
> link_bpp_x16, 0);
>
> - num_joined_pipes = intel_dp_num_joined_pipes(intel_dp, connector,
> - mode->hdisplay, target_clock);
> - max_dotclk *= num_joined_pipes;
> -
> - if (target_clock > max_dotclk)
> - return MODE_CLOCK_HIGH;
> -
> - status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
> - if (status != MODE_OK)
> - return status;
> -
> - if (intel_dp_has_dsc(connector)) {
> - int pipe_bpp;
> -
> - /*
> - * TBD pass the connector BPC,
> - * for now U8_MAX so that max BPC on that platform would be picked
> - */
> - pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
> -
> - /*
> - * Output bpp is stored in 6.4 format so right shift by 4 to get the
> - * integer value since we support only integer values of bpp.
> - */
> - if (intel_dp_is_edp(intel_dp)) {
> - dsc_max_compressed_bpp =
> - drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
> -
> - dsc_slice_count =
> - intel_dp_dsc_get_slice_count(connector,
> - target_clock,
> - mode->hdisplay,
> - num_joined_pipes);
> -
> - dsc = dsc_max_compressed_bpp && dsc_slice_count;
> - } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
> - unsigned long bw_overhead_flags = 0;
> -
> - if (!drm_dp_is_uhbr_rate(max_link_clock))
> - bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
> -
> - dsc = intel_dp_mode_valid_with_dsc(connector,
> - max_link_clock, max_lanes,
> - target_clock, mode->hdisplay,
> - num_joined_pipes,
> - output_format, pipe_bpp,
> - bw_overhead_flags);
> + for (i = 0; i < ARRAY_SIZE(joiner_candidates); i++) {
> + int max_dotclk = display->cdclk.max_dotclk_freq;
> + enum joiner_type joiner = joiner_candidates[i];
> +
> + status = MODE_CLOCK_HIGH;
> +
> + if (joiner == FORCED_JOINER) {
> + if (!connector->force_joined_pipes)
> + continue;
> + num_joined_pipes = connector->force_joined_pipes;
> + } else {
> + num_joined_pipes = 1 << joiner;
> + }
> +
> + if ((joiner >= NO_JOINER && !intel_dp_has_joiner(intel_dp)) ||
> + (joiner == BIG_JOINER && !HAS_BIGJOINER(display)) ||
> + (joiner == ULTRA_JOINER && !HAS_ULTRAJOINER(display)))
There are a bunch of superfluous parentheses in that condition.
Anyway, this makes me wonder whether we should reconsider the
HAS_BIGJOINER() and HAS_ULTRAJOINER() naming, and the enum thing here.
We're adding a bunch of logic to enumerate combos, and to check those
against joiner availability. But really, we could have a HAS_JOINER(num)
that says whether we can join that many pipes. Maybe even take the
compression as a parameter.
I know the spec talks about big/ultra joiner, but for the more casual
reader of the code, you really want to know how many pipes you're
talking about joining, not the *name* of the thing.
If we had that in place, we could turn the whole joiner loop here from
enumerating the enum values to enumerating the number of joined pipes,
and checking whether we can join them. No need for the enum at all, and
I think the code might end up cleaner too.
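Roughly something along these lines; a self-contained sketch, with
made-up capability bits and helper names standing in for the real i915
structures, just to show the shape of the loop:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the platform capability checks. */
struct display_caps {
	bool has_bigjoiner;   /* can join 2 pipes */
	bool has_ultrajoiner; /* can join 4 pipes */
};

/* The HAS_JOINER(num) idea: can this display join @num_pipes pipes? */
static bool has_joiner(const struct display_caps *caps, int num_pipes)
{
	switch (num_pipes) {
	case 1:
		return true; /* a single pipe always works */
	case 2:
		return caps->has_bigjoiner;
	case 4:
		return caps->has_ultrajoiner;
	default:
		return false;
	}
}

/*
 * Enumerate joined pipe counts directly instead of enum names.
 * Returns the smallest workable pipe count, or 0 if none fits.
 */
static int pick_num_pipes(const struct display_caps *caps,
			  int hdisplay, int per_pipe_hdisplay_limit)
{
	for (int num_pipes = 1; num_pipes <= 4; num_pipes *= 2) {
		if (!has_joiner(caps, num_pipes))
			break;
		if (hdisplay <= num_pipes * per_pipe_hdisplay_limit)
			return num_pipes;
	}

	return 0;
}
```

The point being that the caller reasons about pipe counts, not joiner
names, and the capability check collapses into one helper.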
> + break;
> +
> + if (mode->hdisplay > num_joined_pipes * intel_dp_hdisplay_limit(display))
> + continue;
> +
> + status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
> + if (status != MODE_OK)
> + continue;
> +
> + if (intel_dp_has_dsc(connector)) {
> + int pipe_bpp;
> +
> + /*
> + * TBD pass the connector BPC,
> + * for now U8_MAX so that max BPC on that platform would be picked
> + */
> + pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
> +
> + /*
> + * Output bpp is stored in 6.4 format so right shift by 4 to get the
> + * integer value since we support only integer values of bpp.
> + */
> + if (intel_dp_is_edp(intel_dp)) {
> + dsc_max_compressed_bpp =
> + drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
> +
> + dsc_slice_count =
> + intel_dp_dsc_get_slice_count(connector,
> + target_clock,
> + mode->hdisplay,
> + num_joined_pipes);
> +
> + dsc = dsc_max_compressed_bpp && dsc_slice_count;
> + } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
> + unsigned long bw_overhead_flags = 0;
> +
> + if (!drm_dp_is_uhbr_rate(max_link_clock))
> + bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
> +
> + dsc = intel_dp_mode_valid_with_dsc(connector,
> + max_link_clock, max_lanes,
> + target_clock, mode->hdisplay,
> + num_joined_pipes,
> + output_format, pipe_bpp,
> + bw_overhead_flags);
> + }
> + }
> +
> + if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
> + continue;
> +
> + if (mode_rate > max_rate && !dsc)
> + continue;
> +
> + status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
> + if (status != MODE_OK)
> + continue;
> +
> + max_dotclk *= num_joined_pipes;
> +
> + if (target_clock <= max_dotclk) {
> + status = MODE_OK;
> + break;
> }
> }
>
> - if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
> - return MODE_CLOCK_HIGH;
> -
> - status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
> if (status != MODE_OK)
> return status;
>
> - if (mode_rate > max_rate && !dsc)
> - return MODE_CLOCK_HIGH;
> -
> return intel_dp_mode_valid_downstream(connector, mode, target_clock);
> +
> }
>
> bool intel_dp_source_supports_tps3(struct intel_display *display)
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 25bfbfd291b0..a27e3b5829bd 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -24,6 +24,13 @@ struct intel_display;
> struct intel_dp;
> struct intel_encoder;
>
> +enum joiner_type {
I wrote the below *before* I wrote the comment above, which makes all of
this moot, but here goes anyway:
"joiner_type" is too generic, it needs a prefix.
> + FORCED_JOINER = -1,
> + NO_JOINER = 0, /* 1 pipe */
> + BIG_JOINER = 1, /* 2 pipes */
> + ULTRA_JOINER = 2, /* 4 pipes */
Ditto for the enumerators. Why are you explicitly initializing the
values; are they somehow meaningful?
> +};
The enum is about the joiner, not about DP. intel_dp.h doesn't have
anything that requires the joiner type.
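For illustration, if the enum were kept, a prefixed variant might look
like the below (names made up). Note the explicit values do turn out to
be load-bearing in the patch, since the loop derives num_joined_pipes
as 1 << joiner; that would at least deserve a comment:

```c
/* Illustrative naming only, not a concrete proposal. */
enum intel_dp_joiner_type {
	INTEL_DP_JOINER_FORCED = -1, /* use connector->force_joined_pipes */
	INTEL_DP_JOINER_NONE = 0,    /* 1 << 0 == 1 pipe */
	INTEL_DP_JOINER_BIG = 1,     /* 1 << 1 == 2 pipes */
	INTEL_DP_JOINER_ULTRA = 2,   /* 1 << 2 == 4 pipes */
};
```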
> +
> struct link_config_limits {
> int min_rate, max_rate;
> int min_lane_count, max_lane_count;
--
Jani Nikula, Intel