intel-gfx.lists.freedesktop.org archive mirror
From: Imre Deak <imre.deak@intel.com>
To: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
Cc: <intel-gfx@lists.freedesktop.org>,
	<intel-xe@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v3 2/6] drm/dp: Add helpers to query the branch DSC max throughput/line-width
Date: Fri, 26 Sep 2025 21:37:57 +0300	[thread overview]
Message-ID: <aNbdheibEF1nLDLJ@ideak-desk> (raw)
In-Reply-To: <aNbBe86ubBUMF3L8@intel.com>

On Fri, Sep 26, 2025 at 07:38:19PM +0300, Ville Syrjälä wrote:
> On Wed, Sep 24, 2025 at 06:23:28PM +0300, Imre Deak wrote:
> > Add helpers to query the DP branch device's per-slice throughput as well
> > as overall throughput and line-width capabilities.
> > 
> > Cc: dri-devel@lists.freedesktop.org
> > Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/display/drm_dp_helper.c | 133 ++++++++++++++++++++++++
> >  include/drm/display/drm_dp.h            |   1 +
> >  include/drm/display/drm_dp_helper.h     |   5 +
> >  3 files changed, 139 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> > index 1c74fe9459ad9..9d9928daaab59 100644
> > --- a/drivers/gpu/drm/display/drm_dp_helper.c
> > +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> > @@ -2844,6 +2844,139 @@ int drm_dp_dsc_sink_supported_input_bpcs(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_S
> >  }
> >  EXPORT_SYMBOL(drm_dp_dsc_sink_supported_input_bpcs);
> >  
> > +/*
> > + * See DP Standard v2.1a 2.8.4 Minimum Slices/Display, Table 2-159 and
> > + * Appendix L.1 Derivation of Slice Count Requirements.
> > + */
> > +static int dsc_branch_min_slice_throughput(int pixel_clock)

Based on the comment below, I'll rename this to
dsc_sink_min_slice_throughput().

> > +{
> > +	if (pixel_clock >= 4800000)
> > +		return 600000;
> > +	else if (pixel_clock >= 2700000)
> > +		return 400000;
> > +	else
> > +		return 340000;
> 
> One slightly worrying thing in the spec says the ppr is the cumulative
> rate for all streams feeding a single display. Then elsewhere it seems
> to be saying this only applies to MST streams. So I guess multiple
> links with SST doesn't count. And it's anyone's guess which way multiple
> links with MST should be interpreted...
> 
> Anyways, that's not really something this helper needs to deal with.
> But perhaps the "pixel_clock" needs to be changed to something else.
> So just "peak_pixel_rate"? Or maybe even "cumulative_peak_pixel_rate" 
> for extra clarity?

Ok, I didn't consider this. As we discussed off-line, this may matter
for a multi-tile MST display. For those, the total pixel rate of all the
tiles should be considered here, calculating from that first the total
count of slices spanning all the tiles, i.e. something like:

peak_pixel_rate = tile_pixel_rate * tile_count
total_slice_count = peak_pixel_rate / dsc_sink_min_slice_throughput(peak_pixel_rate)
tile_slice_count = total_slice_count / tile_count

(not considering the required slice alignment).

To clarify the above, I'd rename pixel_clock to peak_pixel_rate and
describe in the parameter's documentation what it means in the case of
tiled displays.
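For illustration, the calculation above can be sketched as a
self-contained C snippet. dsc_sink_min_slice_throughput() mirrors the
helper from the patch (with the rename discussed above);
tile_slice_count() is a hypothetical helper that exists only in this
sketch, and the required slice-count alignment is still ignored:

```c
#include <assert.h>

/*
 * Thresholds from DP Standard v2.1a Table 2-159; mirrors the patch's
 * dsc_branch_min_slice_throughput(), renamed per the review.
 * Rates are in kHz / kPixels/sec, as in the patch.
 */
static int dsc_sink_min_slice_throughput(int peak_pixel_rate)
{
	if (peak_pixel_rate >= 4800000)
		return 600000;
	else if (peak_pixel_rate >= 2700000)
		return 400000;
	else
		return 340000;
}

/*
 * Hypothetical helper, for illustration only: derive the per-tile
 * slice count of a multi-tile display from the cumulative pixel rate
 * of all tiles. Slice-count alignment is deliberately not handled.
 */
static int tile_slice_count(int tile_pixel_rate, int tile_count)
{
	int peak_pixel_rate = tile_pixel_rate * tile_count;
	int total_slice_count = peak_pixel_rate /
				dsc_sink_min_slice_throughput(peak_pixel_rate);

	return total_slice_count / tile_count;
}
```

E.g. for a 2-tile display with a 2700000 kHz per-tile rate, the
cumulative 5400000 kHz rate selects the 600000 kPixels/sec threshold,
giving 9 slices in total and 4 per tile (before alignment), whereas
looking at one tile in isolation would have given 6.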

> > +}
> > +
> > +/**
> > + * drm_dp_dsc_branch_max_slice_throughput() - Branch device's max DSC pixel throughput per slice
> > + * @dsc_dpcd: DSC capabilities from DPCD
> > + * @pixel_clock: Pixel clock of mode in kHz
> > + * @is_rgb_yuv444: The mode is either RGB or YUV444
> > + *
> > + * Return the branch device's maximum per slice DSC pixel throughput, based on
> > + * the device's DPCD DSC capabilities, @pixel_clock and whether the output
> > + * format @is_rgb_yuv444 or yuv422/yuv420.
> > + *
> > + * Returns:
> > + * The maximum DSC pixel throughput per slice supported by the branch device
> > + * in kPixels/sec.
> > + */
> > +int drm_dp_dsc_branch_max_slice_throughput(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> 
> The "branch" in the name doesn't seem correct. Isn't this
> some kind of "DSC sink" limit?

Yes, I mixed this up somehow with the term for the overall branch caps.
I'll rename this to drm_dp_dsc_sink_max_slice_throughput().

> > +					   int pixel_clock, bool is_rgb_yuv444)
> > +{
> > +	int throughput;
> > +
> > +	throughput = dsc_dpcd[DP_DSC_PEAK_THROUGHPUT - DP_DSC_SUPPORT];
> > +
> > +	if (is_rgb_yuv444)
> > +		throughput = (throughput & DP_DSC_THROUGHPUT_MODE_0_MASK) >>
> > +			     DP_DSC_THROUGHPUT_MODE_0_SHIFT;
> > +	else
> > +		throughput = (throughput & DP_DSC_THROUGHPUT_MODE_1_MASK) >>
> > +			     DP_DSC_THROUGHPUT_MODE_1_SHIFT;
> > +
> > +	switch (throughput) {
> > +	case 0:
> > +		return dsc_branch_min_slice_throughput(pixel_clock);
> > +	case 1:
> > +		return 340000;
> > +	case 2 ... 14:
> > +		return 400000 + 50000 * (throughput - 2);
> > +	case 15:
> > +		return 170000;
> > +	default:
> > +		WARN(1, "Missing case %d\n", throughput);
> > +		return 340000;
> > +	}
> > +}
> > +EXPORT_SYMBOL(drm_dp_dsc_branch_max_slice_throughput);
> > +
> > +static u8 dsc_branch_dpcd_cap(const u8 dpcd[DP_DSC_BRANCH_CAP_SIZE], int reg)
> > +{
> > +	return dpcd[reg - DP_DSC_BRANCH_OVERALL_THROUGHPUT_0];
> > +}
> > +
> > +/**
> > + * drm_dp_dsc_branch_max_overall_throughput() - Branch device's max overall DSC pixel throughput
> > + * @dsc_branch_dpcd: DSC branch capabilities from DPCD
> > + * @is_rgb_yuv444: The mode is either RGB or YUV444
> > + *
> > + * Return the branch device's maximum overall DSC pixel throughput, based on
> > + * the device's DPCD DSC branch capabilities, and whether the output
> > + * format @is_rgb_yuv444 or yuv422/yuv420.
> > + *
> > + * Returns:
> > + * - 0:   The maximum overall throughput capability is not indicated by
> > + *        the device separately and it must be determined from the per-slice
> > + *        max throughput (see @drm_dp_dsc_branch_max_slice_throughput())
> > + *        and the maximum slice count supported by the device.
> > + * - > 0: The maximum overall DSC pixel throughput supported by the branch
> > + *        device in kPixels/sec.
> > + */
> > +int drm_dp_dsc_branch_max_overall_throughput(const u8 dsc_branch_dpcd[DP_DSC_BRANCH_CAP_SIZE],
> > +					     bool is_rgb_yuv444)
> > +{
> > +	int throughput;
> > +
> > +	if (is_rgb_yuv444)
> > +		throughput = dsc_branch_dpcd_cap(dsc_branch_dpcd,
> > +						 DP_DSC_BRANCH_OVERALL_THROUGHPUT_0);
> > +	else
> > +		throughput = dsc_branch_dpcd_cap(dsc_branch_dpcd,
> > +						 DP_DSC_BRANCH_OVERALL_THROUGHPUT_1);
> > +
> > +	switch (throughput) {
> > +	case 0:
> > +		return 0;
> > +	case 1:
> > +		return 680000;
> > +	default:
> > +		return 600000 + 50000 * throughput;
> > +	}
> > +}
> > +EXPORT_SYMBOL(drm_dp_dsc_branch_max_overall_throughput);
> > +
> > +/**
> > + * drm_dp_dsc_branch_max_line_width() - Branch device's max DSC line width
> > + * @dsc_branch_dpcd: DSC branch capabilities from DPCD
> > + *
> > + * Return the branch device's maximum overall DSC line width, based on
> > + * the device's @dsc_branch_dpcd capabilities.
> > + *
> > + * Returns:
> > + * - 0:        The maximum line width is not indicated by the device
> > + *             separately and it must be determined from the maximum
> > + *             slice count and slice-width supported by the device.
> > + * - %-EINVAL: The device indicates an invalid maximum line width
> > + *             (< 2560 pixels).
> > + * - >= 2560:  The maximum line width in pixels.
> > + */
> > +int drm_dp_dsc_branch_max_line_width(const u8 dsc_branch_dpcd[DP_DSC_BRANCH_CAP_SIZE])
> > +{
> > +	int line_width = dsc_branch_dpcd_cap(dsc_branch_dpcd, DP_DSC_BRANCH_MAX_LINE_WIDTH);
> > +
> > +	switch (line_width) {
> > +	case 0:
> > +		return 0;
> > +	case 1 ... 7:
> > +		return -EINVAL;
> > +	default:
> > +		return line_width * 320;
> > +	}
> > +}
> > +EXPORT_SYMBOL(drm_dp_dsc_branch_max_line_width);
> > +
> >  static int drm_dp_read_lttpr_regs(struct drm_dp_aux *aux,
> >  				  const u8 dpcd[DP_RECEIVER_CAP_SIZE], int address,
> >  				  u8 *buf, int buf_size)
> > diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
> > index cf318e3ddb5c5..43978ddd15056 100644
> > --- a/include/drm/display/drm_dp.h
> > +++ b/include/drm/display/drm_dp.h
> > @@ -1686,6 +1686,7 @@ enum drm_dp_phy {
> >  #define DP_BRANCH_OUI_HEADER_SIZE	0xc
> >  #define DP_RECEIVER_CAP_SIZE		0xf
> >  #define DP_DSC_RECEIVER_CAP_SIZE        0x10 /* DSC Capabilities 0x60 through 0x6F */
> > +#define DP_DSC_BRANCH_CAP_SIZE		3
> >  #define EDP_PSR_RECEIVER_CAP_SIZE	2
> >  #define EDP_DISPLAY_CTL_CAP_SIZE	5
> >  #define DP_LTTPR_COMMON_CAP_SIZE	8
> > diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
> > index e438c44409952..cb0cd13d632d2 100644
> > --- a/include/drm/display/drm_dp_helper.h
> > +++ b/include/drm/display/drm_dp_helper.h
> > @@ -211,6 +211,11 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> >  u8 drm_dp_dsc_sink_line_buf_depth(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]);
> >  int drm_dp_dsc_sink_supported_input_bpcs(const u8 dsc_dpc[DP_DSC_RECEIVER_CAP_SIZE],
> >  					 u8 dsc_bpc[3]);
> > +int drm_dp_dsc_branch_max_slice_throughput(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> > +					   int mode_clock, bool is_rgb_yuv444);
> > +int drm_dp_dsc_branch_max_overall_throughput(const u8 dsc_branch_dpcd[DP_DSC_BRANCH_CAP_SIZE],
> > +					     bool is_rgb_yuv444);
> > +int drm_dp_dsc_branch_max_line_width(const u8 dsc_branch_dpcd[DP_DSC_BRANCH_CAP_SIZE]);
> >  
> >  static inline bool
> >  drm_dp_sink_supports_dsc(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
> > -- 
> > 2.49.1
> 
> -- 
> Ville Syrjälä
> Intel


Thread overview: 11+ messages
2025-09-24 15:23 [PATCH v3 0/6] drm/i915/dp: Work around a DSC pixel throughput issue Imre Deak
2025-09-24 15:23 ` [PATCH v3 1/6] drm/dp: Add quirk for Synaptics DSC throughput link-bpp limit Imre Deak
2025-09-24 15:23 ` [PATCH v3 2/6] drm/dp: Add helpers to query the branch DSC max throughput/line-width Imre Deak
2025-09-26 16:38   ` Ville Syrjälä
2025-09-26 18:37     ` Imre Deak [this message]
2025-09-24 15:23 ` [PATCH v3 3/6] drm/i915/dp: Calculate DSC slice count based on per-slice peak throughput Imre Deak
2025-09-24 15:23 ` [PATCH v3 4/6] drm/i915/dp: Pass DPCD device descriptor to intel_dp_get_dsc_sink_cap() Imre Deak
2025-09-24 15:23 ` [PATCH v3 5/6] drm/i915/dp: Verify branch devices' overall pixel throughput/line width Imre Deak
2025-09-24 15:23 ` [PATCH v3 6/6] drm/i915/dp: Handle Synaptics DSC throughput link-bpp quirk Imre Deak
2025-09-24 17:25 ` ✓ i915.CI.BAT: success for drm/i915/dp: Work around a DSC pixel throughput issue (rev4) Patchwork
2025-09-25  0:47 ` ✓ i915.CI.Full: " Patchwork
