public inbox for intel-gfx@lists.freedesktop.org
From: "Nautiyal, Ankit K" <ankit.k.nautiyal@intel.com>
To: <imre.deak@intel.com>
Cc: <intel-gfx@lists.freedesktop.org>,
	<intel-xe@lists.freedesktop.org>, <jani.nikula@linux.intel.com>
Subject: Re: [PATCH 04/16] drm/i915/dp: Rework pipe joiner logic in mode_valid
Date: Thu, 29 Jan 2026 11:18:28 +0530	[thread overview]
Message-ID: <0fea6800-d846-450f-af08-5fa90f6827bf@intel.com> (raw)
In-Reply-To: <079b962e-c649-4abd-a75e-5c50800542be@intel.com>


On 1/29/2026 10:51 AM, Nautiyal, Ankit K wrote:
>
> On 1/28/2026 10:16 PM, Imre Deak wrote:
>> On Wed, Jan 28, 2026 at 07:36:24PM +0530, Ankit Nautiyal wrote:
>>> Currently in intel_dp_mode_valid(), we compute the number of joined pipes
>>> required before deciding whether DSC is needed. This ordering prevents us
>>> from accounting for DSC-related overhead when determining pipe
>>> requirements.
>>>
>>> It is not possible to first decide whether DSC is needed and then compute
>>> the required number of joined pipes, because the two depend on each other:
>>>
>>>   - DSC need is a function of the pipe count (e.g., 4-pipe always requires
>>>     DSC; 2-pipe may require it if uncompressed joiner is unavailable).
>>>
>>>   - Whether a given pipe-join configuration is sufficient depends on
>>>     effective bandwidth, which itself changes when DSC is used.
>>>
>>> As a result, the only correct approach is to iterate candidate pipe counts.
>>>
>>> So, refactor the logic to start with a single pipe and incrementally try
>>> additional pipes only if needed. While DSC overhead is not yet computed
>>> here, this restructuring prepares the code to support that in follow-up
>>> changes.
>>>
>>> Additionally, if a forced joiner configuration is present, we first check
>>> whether it satisfies the bandwidth and timing constraints. If it does not,
>>> we fall back to evaluating configurations with 1, 2, or 4 pipes joined
>>> and prune or keep the mode accordingly.
>>>
>>> v2:
>>>   - Iterate over number of pipes to be joined instead of joiner
>>>     candidates. (Jani)
>>>   - Document the rationale of iterating over number of joined pipes.
>>>     (Imre)
>>>
>>> Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/display/intel_dp.c | 158 +++++++++++++++---------
>>>   1 file changed, 103 insertions(+), 55 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
>>> index 4c3a1b6d0015..599965a6e1a6 100644
>>> --- a/drivers/gpu/drm/i915/display/intel_dp.c
>>> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
>>> @@ -1434,6 +1434,23 @@ bool intel_dp_has_dsc(const struct intel_connector *connector)
>>>       return true;
>>>   }
>>>   +static
>>> +bool intel_dp_can_join(struct intel_display *display,
>>> +               int num_joined_pipes)
>>> +{
>>> +    switch (num_joined_pipes) {
>>> +    case 1:
>>> +        return true;
>>> +    case 2:
>>> +        return HAS_BIGJOINER(display) ||
>>> +               HAS_UNCOMPRESSED_JOINER(display);
>>> +    case 4:
>>> +        return HAS_ULTRAJOINER(display);
>>> +    default:
>>> +        return false;
>>> +    }
>>> +}
>>> +
>>>   static enum drm_mode_status
>>>   intel_dp_mode_valid(struct drm_connector *_connector,
>>>               const struct drm_display_mode *mode)
>>> @@ -1445,13 +1462,13 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>>>       const struct drm_display_mode *fixed_mode;
>>>       int target_clock = mode->clock;
>>>       int max_rate, mode_rate, max_lanes, max_link_clock;
>>> -    int max_dotclk = display->cdclk.max_dotclk_freq;
>>>       u16 dsc_max_compressed_bpp = 0;
>>>       u8 dsc_slice_count = 0;
>>>       enum drm_mode_status status;
>>>       bool dsc = false;
>>>       int num_joined_pipes;
>>>       int link_bpp_x16;
>>> +    int num_pipes;
>>>         status = intel_cpu_transcoder_mode_valid(display, mode);
>>>       if (status != MODE_OK)
>>> @@ -1488,67 +1505,98 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>>>                          target_clock, mode->hdisplay,
>>>                          link_bpp_x16, 0);
>>>   -    num_joined_pipes = intel_dp_num_joined_pipes(intel_dp, connector,
>>> -                             mode->hdisplay, target_clock);
>>> -    max_dotclk *= num_joined_pipes;
>>> -
>>> -    if (target_clock > max_dotclk)
>>> -        return MODE_CLOCK_HIGH;
>>> -
>>> -    status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
>>> -    if (status != MODE_OK)
>>> -        return status;
>>> -
>>> -    if (intel_dp_has_dsc(connector)) {
>>> -        int pipe_bpp;
>>> -
>>> -        /*
>>> -         * TBD pass the connector BPC,
>>> -         * for now U8_MAX so that max BPC on that platform would be picked
>>> -         */
>>> -        pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
>>> -
>>> -        /*
>>> -         * Output bpp is stored in 6.4 format so right shift by 4 to get the
>>> -         * integer value since we support only integer values of bpp.
>>> -         */
>>> -        if (intel_dp_is_edp(intel_dp)) {
>>> -            dsc_max_compressed_bpp =
>>> -                drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
>>> -
>>> -            dsc_slice_count =
>>> -                intel_dp_dsc_get_slice_count(connector,
>>> -                                 target_clock,
>>> -                                 mode->hdisplay,
>>> -                                 num_joined_pipes);
>>> -
>>> -            dsc = dsc_max_compressed_bpp && dsc_slice_count;
>>> -        } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>>> -            unsigned long bw_overhead_flags = 0;
>>> -
>>> -            if (!drm_dp_is_uhbr_rate(max_link_clock))
>>> -                bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
>>> -
>>> -            dsc = intel_dp_mode_valid_with_dsc(connector,
>>> -                               max_link_clock, max_lanes,
>>> -                               target_clock, mode->hdisplay,
>>> -                               num_joined_pipes,
>>> -                               output_format, pipe_bpp,
>>> -                               bw_overhead_flags);
>>> +    /*
>>> +     * We cannot determine the required pipe-join count before knowing
>>> +     * whether DSC is needed, nor can we determine DSC need without
>>> +     * knowing the pipe count.
>>> +     * Because of this dependency cycle, the only correct approach is to
>>> +     * iterate over candidate pipe counts and evaluate each combination.
>>> +     */
>>> +    for (num_pipes = 0; num_pipes < I915_MAX_PIPES; num_pipes++) {
>>> +        int max_dotclk = display->cdclk.max_dotclk_freq;
>>> +
>>> +        status = MODE_CLOCK_HIGH;
>>> +
>>> +        if (num_pipes == 0) {
>>> +            if (!connector->force_joined_pipes)
>>> +                continue;
>>> +            num_joined_pipes = connector->force_joined_pipes;
>>> +        } else {
>>> +            num_joined_pipes = num_pipes;
>>> +        }
>> The current way is to try connector->force_joined_pipes and fail the
>> commit if that doesn't work. Here you'd change that to fall back to trying
>> non-forced pipe-joined configs in that case. If that's needed (not sure
>> if that's a good idea, since then the user wouldn't know which case
>> succeeded or failed), it should be a separate change. Here it could be
>> simply an if (forced_joined_pipes && num_pipes != forced_joined_pipes)
>> continue and then use num_pipes instead of num_joined_pipes later in the
>> loop.
>
>
> Hmm, yeah, the initial thought was to try the forced joiner config
> first: if it succeeds, well and good; if not, we do our usual thing.
>
> But I see your point. I think we can just fail the commit if the forced
> joiner config doesn't work.
>
> This will also help simplify the loop and will allow us to have an
> iterator macro, as you mentioned in the later patch.


For the mode_valid phase, can there be a corner case where the forced joiner
is set to some high value with which all modes somehow fail?

Can hdisplay/htotal or the clock for a mode go below some acceptable value
when the joiner is used?
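
To make the structure under discussion concrete, here is a simplified,
illustrative sketch (pick_joined_pipes() and config_ok() are made-up names,
not actual i915 code) of iterating candidate pipe counts while restricting
the search to a forced configuration:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Illustrative sketch only: iterate the candidate joined-pipe counts
 * (1, 2, 4) and, if a configuration is forced, consider only that
 * count. config_ok() stands in for the bandwidth/timing checks that
 * intel_dp_mode_valid() performs for each candidate.
 */
static int pick_joined_pipes(int force_joined_pipes,
			     bool (*config_ok)(int num_joined_pipes))
{
	static const int candidates[] = { 1, 2, 4 };
	size_t i;

	for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
		int num_pipes = candidates[i];

		/* A forced config restricts the search to that count only. */
		if (force_joined_pipes && num_pipes != force_joined_pipes)
			continue;

		if (config_ok(num_pipes))
			return num_pipes;
	}

	return 0; /* no viable configuration: prune the mode */
}
```

With this shape, a forced joiner that fails simply yields no viable
configuration instead of silently falling back to non-forced configs.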

Regards,

Ankit

>
>
>>
>>> +
>>> +        if (!intel_dp_can_join(display, num_joined_pipes))
>>> +            continue;
>>> +
>>> +        if (mode->hdisplay > num_joined_pipes * intel_dp_max_hdisplay_per_pipe(display))
>>> +            continue;
>>> +
>>> +        status = intel_pfit_mode_valid(display, mode, output_format, num_joined_pipes);
>>> +        if (status != MODE_OK)
>>> +            continue;
>>> +
>>> +        if (intel_dp_has_dsc(connector)) {
>>> +            int pipe_bpp;
>>> +
>>> +            /*
>>> +             * TBD pass the connector BPC,
>>> +             * for now U8_MAX so that max BPC on that platform would be picked
>>> +             */
>>> +            pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
>>> +
>>> +            /*
>>> +             * Output bpp is stored in 6.4 format so right shift by 4 to get the
>>> +             * integer value since we support only integer values of bpp.
>>> +             */
>>> +            if (intel_dp_is_edp(intel_dp)) {
>>> +                dsc_max_compressed_bpp =
>>> +                    drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
>>> +
>>> +                dsc_slice_count =
>>> +                    intel_dp_dsc_get_slice_count(connector,
>>> +                                     target_clock,
>>> +                                     mode->hdisplay,
>>> +                                     num_joined_pipes);
>>> +
>>> +                dsc = dsc_max_compressed_bpp && dsc_slice_count;
>>> +            } else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>>> +                unsigned long bw_overhead_flags = 0;
>>> +
>>> +                if (!drm_dp_is_uhbr_rate(max_link_clock))
>>> +                    bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
>>> +
>>> +                dsc = intel_dp_mode_valid_with_dsc(connector,
>>> +                                   max_link_clock, max_lanes,
>>> +                                   target_clock, mode->hdisplay,
>>> +                                   num_joined_pipes,
>>> +                                   output_format, pipe_bpp,
>>> +                                   bw_overhead_flags);
>>> +            }
>>> +        }
>>> +
>>> +        if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
>>> +            continue;
>>> +
>>> +        if (mode_rate > max_rate && !dsc)
>>> +            continue;
>>> +
>>> +        status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
>>> +        if (status != MODE_OK)
>>> +            continue;
>>> +
>>> +        max_dotclk *= num_joined_pipes;
>>> +
>>> +        if (target_clock <= max_dotclk) {
>>> +            status = MODE_OK;
>> status stays MODE_OK if target_clock > max_dotclk.
>
> Oh yes.. will fix this.
>
>
>>
>>> +            break;
>>>           }
>>>       }
>>>   -    if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
>>> -        return MODE_CLOCK_HIGH;
>>> -
>>> -    status = intel_mode_valid_max_plane_size(display, mode, num_joined_pipes);
>>>       if (status != MODE_OK)
>>>           return status;
>>>   -    if (mode_rate > max_rate && !dsc)
>>> -        return MODE_CLOCK_HIGH;
>>> -
>>>       return intel_dp_mode_valid_downstream(connector, mode, target_clock);
>>> +
>> Extra w/s.
>
> Will drop this extra line.
>
>
> Thanks & Regards,
>
> Ankit
>
>>>   }
>>>     bool intel_dp_source_supports_tps3(struct intel_display *display)
>>> -- 
>>> 2.45.2
>>>


Thread overview: 38+ messages
2026-01-28 14:06 [PATCH 00/16] Account for DSC bubble overhead for horizontal slices Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 01/16] drm/i915/dp: Early reject bad hdisplay in intel_dp_mode_valid Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 02/16] drm/i915/dp: Move num_joined_pipes and related checks together Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 03/16] drm/i915/dp: Extract helper to get the hdisplay limit Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 04/16] drm/i915/dp: Rework pipe joiner logic in mode_valid Ankit Nautiyal
2026-01-28 16:46   ` Imre Deak
2026-01-29  5:21     ` Nautiyal, Ankit K
2026-01-29  5:48       ` Nautiyal, Ankit K [this message]
2026-01-29  9:24         ` Imre Deak
2026-01-29 10:01           ` Nautiyal, Ankit K
2026-01-28 14:06 ` [PATCH 05/16] drm/i915/dp: Rework pipe joiner logic in compute_config Ankit Nautiyal
2026-01-28 17:03   ` Imre Deak
2026-01-28 14:06 ` [PATCH 06/16] drm/i915/dp_mst: Move the check for dotclock at the end Ankit Nautiyal
2026-01-28 17:07   ` Imre Deak
2026-01-28 14:06 ` [PATCH 07/16] drm/i915/dp_mst: Move the joiner dependent code together Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 08/16] drm/i915/dp_mst: Rework pipe joiner logic in mode_valid Ankit Nautiyal
2026-01-28 21:21   ` Imre Deak
2026-01-28 14:06 ` [PATCH 09/16] drm/i915/dp_mst: Extract helper to compute link for given joiner config Ankit Nautiyal
2026-01-28 14:06 ` [PATCH 10/16] drm/i915/dp_mst: Rework pipe joiner logic in compute_config Ankit Nautiyal
2026-01-28 22:06   ` Imre Deak
2026-01-28 22:11     ` Imre Deak
2026-01-28 14:06 ` [PATCH 11/16] drm/i915/dp: Introduce helper to check pixel rate against dotclock limits Ankit Nautiyal
2026-01-28 22:17   ` Imre Deak
2026-01-29  3:57   ` kernel test robot
2026-01-28 14:06 ` [PATCH 12/16] drm/i915/dp: Refactor dsc_slice_count handling in intel_dp_mode_valid() Ankit Nautiyal
2026-01-28 22:19   ` Imre Deak
2026-01-28 14:06 ` [PATCH 13/16] drm/i915/dp: Account for DSC slice overhead Ankit Nautiyal
2026-01-28 22:35   ` Imre Deak
2026-01-28 14:06 ` [PATCH 14/16] drm/i915/dp: Add helpers for joiner candidate loops Ankit Nautiyal
2026-01-28 23:00   ` Imre Deak
2026-01-28 14:06 ` [PATCH 15/16] drm/i915/display: Add upper limit check for pixel clock Ankit Nautiyal
2026-01-28 20:49   ` Imre Deak
2026-01-28 14:06 ` [PATCH 16/16] drm/i915/display: Extend the max dotclock limit to WCL and pre PTL platforms Ankit Nautiyal
2026-01-28 20:53   ` Imre Deak
2026-01-28 15:13 ` ✓ i915.CI.BAT: success for Account for DSC bubble overhead for horizontal slices (rev4) Patchwork
2026-01-28 20:04 ` ✗ i915.CI.Full: failure " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2026-01-29 17:11 [PATCH 00/16] Account for DSC bubble overhead for horizontal slices Ankit Nautiyal
2026-01-29 17:11 ` [PATCH 04/16] drm/i915/dp: Rework pipe joiner logic in mode_valid Ankit Nautiyal
2026-01-29 19:31   ` Imre Deak
