From: Imre Deak <imre.deak@intel.com>
To: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Cc: <intel-gfx@lists.freedesktop.org>,
<intel-xe@lists.freedesktop.org>, <jani.nikula@linux.intel.com>
Subject: Re: [PATCH 05/16] drm/i915/dp: Rework pipe joiner logic in compute_config
Date: Thu, 29 Jan 2026 21:35:15 +0200
Message-ID: <aXu2c7_CNClkzvR-@ideak-desk.lan>
In-Reply-To: <20260129171154.3898077-6-ankit.k.nautiyal@intel.com>
On Thu, Jan 29, 2026 at 10:41:43PM +0530, Ankit Nautiyal wrote:
> Currently, the number of joined pipes is determined early in the flow,
> which limits flexibility in accounting for DSC slice overhead. To
> address this, recompute the joined pipe count during DSC configuration.
>
> Refactor intel_dp_dsc_compute_config() to iterate over joiner candidates
> and select the minimal joiner configuration that satisfies the mode
> requirements. This prepares the logic for future changes that will
> consider DSC slice overhead.
>
> v2:
> - Rename helper to intel_dp_compute_link_for_joined_pipes(). (Imre)
> - Move the check for max dotclock inside the helper so that if the
> dotclock check fails for the non-DSC case for a given number of joined
> pipes, we are able to fall back to the DSC mode. (Imre)
> v3:
> - Drop fallback to other joiner configurations, if the force joiner
> configuration fails. (Imre)
> - Check the max dotclock limit for the non-DSC case first and fall
> back to DSC if the check fails. (Imre)
> - Initialize ret to -EINVAL to handle case where we bail out early.
> (Imre)
>
> Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_dp.c | 93 ++++++++++++++++++++-----
> 1 file changed, 76 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 957a65795df0..a355900a31d9 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2802,33 +2802,24 @@ bool intel_dp_joiner_needs_dsc(struct intel_display *display,
> }
>
> static int
> -intel_dp_compute_link_config(struct intel_encoder *encoder,
> - struct intel_crtc_state *pipe_config,
> - struct drm_connector_state *conn_state,
> - bool respect_downstream_limits)
> +intel_dp_compute_link_for_joined_pipes(struct intel_encoder *encoder,
> + struct intel_crtc_state *pipe_config,
> + struct drm_connector_state *conn_state,
> + bool respect_downstream_limits)
> {
> struct intel_display *display = to_intel_display(encoder);
> - struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
> + int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> struct intel_connector *connector =
> to_intel_connector(conn_state->connector);
> const struct drm_display_mode *adjusted_mode =
> &pipe_config->hw.adjusted_mode;
> struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> + int max_dotclk = display->cdclk.max_dotclk_freq;
> struct link_config_limits limits;
> bool dsc_needed, joiner_needs_dsc;
> - int num_joined_pipes;
> int ret = 0;
>
> - if (pipe_config->fec_enable &&
> - !intel_dp_supports_fec(intel_dp, connector, pipe_config))
> - return -EINVAL;
> -
> - num_joined_pipes = intel_dp_num_joined_pipes(intel_dp, connector,
> - adjusted_mode->crtc_hdisplay,
> - adjusted_mode->crtc_clock);
> - if (num_joined_pipes > 1)
> - pipe_config->joiner_pipes = GENMASK(crtc->pipe + num_joined_pipes - 1, crtc->pipe);
> -
> + max_dotclk *= num_joined_pipes;
> joiner_needs_dsc = intel_dp_joiner_needs_dsc(display, num_joined_pipes);
>
> dsc_needed = joiner_needs_dsc || intel_dp->force_dsc_en ||
> @@ -2851,7 +2842,8 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> fxp_q4_from_int(pipe_config->pipe_bpp),
> fxp_q4_from_int(pipe_config->pipe_bpp),
> 0, false);
> - if (ret)
> +
> + if (ret || adjusted_mode->crtc_clock > max_dotclk)
> dsc_needed = true;
> }
>
> @@ -2876,6 +2868,9 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> conn_state, &limits, 64);
> if (ret < 0)
> return ret;
> +
> + if (adjusted_mode->crtc_clock > max_dotclk)
> + return -EINVAL;
> }
>
> drm_dbg_kms(display->drm,
> @@ -2891,6 +2886,70 @@ intel_dp_compute_link_config(struct intel_encoder *encoder,
> return 0;
> }
>
> +static int
> +intel_dp_compute_link_config(struct intel_encoder *encoder,
> + struct intel_crtc_state *crtc_state,
> + struct drm_connector_state *conn_state,
> + bool respect_downstream_limits)
> +{
> + struct intel_display *display = to_intel_display(encoder);
> + struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
> + struct intel_connector *connector =
> + to_intel_connector(conn_state->connector);
> + const struct drm_display_mode *adjusted_mode =
> + &crtc_state->hw.adjusted_mode;
> + struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
> + int num_joined_pipes;
> + int num_pipes;
> + int ret = -EINVAL;
> +
> + if (crtc_state->fec_enable &&
> + !intel_dp_supports_fec(intel_dp, connector, crtc_state))
> + return -EINVAL;
> +
> + for (num_pipes = 1; num_pipes <= I915_MAX_PIPES; num_pipes++) {
> + if (connector->force_joined_pipes &&
> + num_pipes != connector->force_joined_pipes)
> + continue;
> +
> + num_joined_pipes = num_pipes;
It's enough to track this in only one variable (num_pipes and
num_joined_pipes are redundant here); with that:
Reviewed-by: Imre Deak <imre.deak@intel.com>
> +
> + if (!intel_dp_can_join(display, num_joined_pipes))
> + continue;
> +
> + if (adjusted_mode->hdisplay >
> + num_joined_pipes * intel_dp_max_hdisplay_per_pipe(display))
> + continue;
> +
> + /*
> + * NOTE:
> + * The crtc_state->joiner_pipes should have been set at the end
> + * only if all the conditions are met. However that would mean
> + * that num_joined_pipes is passed around to all helpers and
> + * make them use it instead of using crtc_state->joiner_pipes
> + * directly or indirectly (via intel_crtc_num_joined_pipes()).
> + *
> + * For now, setting crtc_state->joiner_pipes to the candidate
> + * value to avoid the above churn and resetting it to 0, in case
> + * no joiner candidate is found to be suitable for the given
> + * configuration.
> + */
> + if (num_joined_pipes > 1)
> + crtc_state->joiner_pipes = GENMASK(crtc->pipe + num_joined_pipes - 1,
> + crtc->pipe);
> +
> + ret = intel_dp_compute_link_for_joined_pipes(encoder, crtc_state, conn_state,
> + respect_downstream_limits);
> + if (ret == 0)
> + break;
> + }
> +
> + if (ret < 0)
> + crtc_state->joiner_pipes = 0;
> +
> + return ret;
> +}
> +
> bool intel_dp_limited_color_range(const struct intel_crtc_state *crtc_state,
> const struct drm_connector_state *conn_state)
> {
> --
> 2.45.2
>