Intel-GFX Archive on lore.kernel.org
From: "Nautiyal, Ankit K" <ankit.k.nautiyal@intel.com>
To: <imre.deak@intel.com>
Cc: <intel-gfx@lists.freedesktop.org>,
	<intel-xe@lists.freedesktop.org>, <jani.nikula@linux.intel.com>
Subject: Re: [PATCH 04/17] drm/i915/dp: Rework pipe joiner logic in mode_valid
Date: Mon, 2 Feb 2026 14:59:17 +0530	[thread overview]
Message-ID: <0201449f-7f8e-44af-b7fb-e7890e8570e1@intel.com> (raw)
In-Reply-To: <aYBt5eSJrW-C9TxI@ideak-desk.lan>


On 2/2/2026 2:57 PM, Imre Deak wrote:
> On Mon, Feb 02, 2026 at 02:54:25PM +0530, Nautiyal, Ankit K wrote:
>> On 2/2/2026 2:10 PM, Imre Deak wrote:
>>> On Fri, Jan 30, 2026 at 01:47:59PM +0530, Ankit Nautiyal wrote:
>>>> Currently in intel_dp_mode_valid(), we compute the number of joined pipes
>>>> required before deciding whether DSC is needed. This ordering prevents us
>>>> from accounting for DSC-related overhead when determining pipe
>>>> requirements.
>>>>
>>>> It is not possible to first decide whether DSC is needed and then compute
>>>> the required number of joined pipes, because the two depend on each other:
>>>>
>>>>    - DSC need is a function of the pipe count (e.g., 4-pipe always requires
>>>>      DSC; 2-pipe may require it if uncompressed joiner is unavailable).
>>>>
>>>>    - Whether a given pipe‑join configuration is sufficient depends on
>>>>      effective bandwidth, which itself changes when DSC is used.
>>>>
>>>> As a result, the only correct approach is to iterate candidate pipe counts.
>>>>
>>>> So, refactor the logic to start with a single pipe and incrementally try
>>>> additional pipes only if needed. While DSC overhead is not yet computed
>>>> here, this restructuring prepares the code to support that in follow-up
>>>> changes.
>>>>
>>>> If a forced joiner configuration is present, we just check for that
>>>> configuration. If it fails, we bail out and return instead of trying
>>>> other joiner configurations.
>>>>
>>>> v2:
>>>>    - Iterate over number of pipes to be joined instead of joiner
>>>>      candidates. (Jani)
>>>>    - Document the rationale of iterating over number of joined pipes.
>>>>      (Imre)
>>>> v3:
>>>>    - In case the forced joiner configuration doesn't work, do not fall
>>>>      back to the normal routine; bail out instead of trying other joiner
>>>>      configurations. (Imre)
>>>> v4:
>>>>    - Use num_joined_pipes instead of num_pipes. (Imre)
>>>>    - Initialize status before the loop starts. (Imre)
>>>>
>>>> Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
>>>> Reviewed-by: Imre Deak <imre.deak@intel.com>
>>> There is still one issue, see below.
>>>
>>>> ---
>>>>    drivers/gpu/drm/i915/display/intel_dp.c | 135 ++++++++++++++++--------
>>>>    1 file changed, 89 insertions(+), 46 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
>>>> index 4c3a1b6d0015..dbe63efc1694 100644
>>>> --- a/drivers/gpu/drm/i915/display/intel_dp.c
>>>> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
>>>> @@ -1434,6 +1434,23 @@ bool intel_dp_has_dsc(const struct intel_connector *connector)
>>>>    	return true;
>>>>    }
>>>> +static
>>>> +bool intel_dp_can_join(struct intel_display *display,
>>>> +		       int num_joined_pipes)
>>>> +{
>>>> +	switch (num_joined_pipes) {
>>>> +	case 1:
>>>> +		return true;
>>>> +	case 2:
>>>> +		return HAS_BIGJOINER(display) ||
>>>> +		       HAS_UNCOMPRESSED_JOINER(display);
>>>> +	case 4:
>>>> +		return HAS_ULTRAJOINER(display);
>>>> +	default:
>>>> +		return false;
>>>> +	}
>>>> +}
>>>> +
>>>>    static enum drm_mode_status
>>>>    intel_dp_mode_valid(struct drm_connector *_connector,
>>>>    		    const struct drm_display_mode *mode)
>>>> @@ -1445,7 +1462,6 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>>>>    	const struct drm_display_mode *fixed_mode;
>>>>    	int target_clock = mode->clock;
>>>>    	int max_rate, mode_rate, max_lanes, max_link_clock;
>>>> -	int max_dotclk = display->cdclk.max_dotclk_freq;
>>>>    	u16 dsc_max_compressed_bpp = 0;
>>>>    	u8 dsc_slice_count = 0;
>>>>    	enum drm_mode_status status;
>>>> @@ -1488,66 +1504,93 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>>>>    					   target_clock, mode->hdisplay,
>>>>    					   link_bpp_x16, 0);
>>>> -	num_joined_pipes = intel_dp_num_joined_pipes(intel_dp, connector,
>>>> -						     mode->hdisplay, target_clock);
>>>> -	max_dotclk *= num_joined_pipes;
>>>> +	/*
>>>> +	 * We cannot determine the required pipe‑join count before knowing whether
>>>> +	 * DSC is needed, nor can we determine DSC need without knowing the pipe
>>>> +	 * count.
>>>> +	 * Because of this dependency cycle, the only correct approach is to iterate
>>>> +	 * over candidate pipe counts and evaluate each combination.
>>>> +	 */
>>>> +	status = MODE_CLOCK_HIGH;
>>>> +	for (num_joined_pipes = 1; num_joined_pipes <= I915_MAX_PIPES; num_joined_pipes++) {
>>>> +		int max_dotclk = display->cdclk.max_dotclk_freq;
>>>> -	if (target_clock > max_dotclk)
>>>> -		return MODE_CLOCK_HIGH;
>>>> +		if (connector->force_joined_pipes &&
>>>> +		    num_joined_pipes != connector->force_joined_pipes)
>>>> +			continue;
>>>> -	status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
>>>> -	if (status != MODE_OK)
>>>> -		return status;
>>>> +		if (!intel_dp_can_join(display, num_joined_pipes))
>>>> +			continue;
>>>> -	if (intel_dp_has_dsc(connector)) {
>>>> -		int pipe_bpp;
>>>> +		if (mode->hdisplay > num_joined_pipes * intel_dp_max_hdisplay_per_pipe(display))
>>>> +			continue;
>>>> -		/*
>>>> -		 * TBD pass the connector BPC,
>>>> -		 * for now U8_MAX so that max BPC on that platform would be picked
>>>> -		 */
>>>> -		pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
>>>> +		status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
>>>> +		if (status != MODE_OK)
>>>> +			continue;
>>> I missed it in my review of this particular patch, even though
>>> I did mention the similar issue elsewhere:
>>>
>>> status is guaranteed to be MODE_OK at this point and then ...
>>
>> Oh yes, this was not a problem earlier, as I was setting status =
>> MODE_CLOCK_HIGH inside the loop.
> It was a problem even then, if this continue happened in the last
> iteration.

Ah right (face palm).


>
>> Thanks for catching this, will fix this in this patch and in patch #8,
>> and re-send.
>>
>>
>> Regards,
>>
>> Ankit
>>
>>>> -		/*
>>>> -		 * Output bpp is stored in 6.4 format so right shift by 4 to get the
>>>> -		 * integer value since we support only integer values of bpp.
>>>> -		 */
>>>> -		if (intel_dp_is_edp(intel_dp)) {
>>>> -			dsc_max_compressed_bpp =
>>>> -				drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
>>>> +		if (intel_dp_has_dsc(connector)) {
>>>> +			int pipe_bpp;
>>>> -			dsc_slice_count =
>>>> -				intel_dp_dsc_get_slice_count(connector,
>>>> -							     target_clock,
>>>> -							     mode->hdisplay,
>>>> -							     num_joined_pipes);
>>>> +			/*
>>>> +			 * TBD pass the connector BPC,
>>>> +			 * for now U8_MAX so that max BPC on that platform would be picked
>>>> +			 */
>>>> +			pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
>>>> -			dsc = dsc_max_compressed_bpp && dsc_slice_count;
>>>> -		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>>>> -			unsigned long bw_overhead_flags = 0;
>>>> +			/*
>>>> +			 * Output bpp is stored in 6.4 format so right shift by 4 to get the
>>>> +			 * integer value since we support only integer values of bpp.
>>>> +			 */
>>>> +			if (intel_dp_is_edp(intel_dp)) {
>>>> +				dsc_max_compressed_bpp =
>>>> +					drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
>>>> -			if (!drm_dp_is_uhbr_rate(max_link_clock))
>>>> -				bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
>>>> +				dsc_slice_count =
>>>> +					intel_dp_dsc_get_slice_count(connector,
>>>> +								     target_clock,
>>>> +								     mode->hdisplay,
>>>> +								     num_joined_pipes);
>>>> -			dsc = intel_dp_mode_valid_with_dsc(connector,
>>>> -							   max_link_clock, max_lanes,
>>>> -							   target_clock, mode->hdisplay,
>>>> -							   num_joined_pipes,
>>>> -							   output_format, pipe_bpp,
>>>> -							   bw_overhead_flags);
>>>> +				dsc = dsc_max_compressed_bpp && dsc_slice_count;
>>>> +			} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>>>> +				unsigned long bw_overhead_flags = 0;
>>>> +
>>>> +				if (!drm_dp_is_uhbr_rate(max_link_clock))
>>>> +					bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
>>>> +
>>>> +				dsc = intel_dp_mode_valid_with_dsc(connector,
>>>> +								   max_link_clock, max_lanes,
>>>> +								   target_clock, mode->hdisplay,
>>>> +								   num_joined_pipes,
>>>> +								   output_format, pipe_bpp,
>>>> +								   bw_overhead_flags);
>>>> +			}
>>>>    		}
>>>> +
>>>> +		if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
>>>> +			continue;
>>> ... this will continue with status == MODE_OK and the loop can terminate
>>> like that. So need a status = MODE_CLOCK_HIGH before continue.
>>>
>>>> +
>>>> +		if (mode_rate > max_rate && !dsc)
>>> This needs a status = MODE_CLOCK_HIGH as well.
>>>
>>> With the above fixed:
>>> Reviewed-by: Imre Deak <imre.deak@intel.com>
>>>
>>>> +			continue;
>>>> +
>>>> +		status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
>>>> +		if (status != MODE_OK)
>>>> +			continue;
>>>> +
>>>> +		max_dotclk *= num_joined_pipes;
>>>> +
>>>> +		if (target_clock > max_dotclk) {
>>>> +			status = MODE_CLOCK_HIGH;
>>>> +			continue;
>>>> +		}
>>>> +
>>>> +		break;
>>>>    	}
>>>> -	if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
>>>> -		return MODE_CLOCK_HIGH;
>>>> -
>>>> -	status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
>>>>    	if (status != MODE_OK)
>>>>    		return status;
>>>> -	if (mode_rate > max_rate && !dsc)
>>>> -		return MODE_CLOCK_HIGH;
>>>> -
>>>>    	return intel_dp_mode_valid_downstream(connector, mode, target_clock);
>>>>    }
>>>> -- 
>>>> 2.45.2
>>>>
