intel-gfx.lists.freedesktop.org archive mirror
* [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation
@ 2025-11-27 17:49 Imre Deak
  2025-11-27 17:49 ` [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5 Imre Deak
                   ` (54 more replies)
  0 siblings, 55 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: Ankit Nautiyal

This patchset cleans up the DP link BW and the DSC slice config
computation with the following aims:

- Fix the BW calculation, taking into account all the MST/DSC/SSC/FEC
  related overhead and using the proper UHBR/non-UHBR BW utilization
  factor (which depends on the ratio of the symbol size to the effective
  data size).
- Unify the BW calculation during mode validation and state computation.
- Unify the BW calculation for eDP / DP-SST / DP-MST.
- Compute the DSC slice configuration in a way that better reflects the
  pipe/DSC stream engine/slice parameters of the config.
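The BW checks this series unifies boil down to comparing the mode's data rate, including overhead, against the link's capacity after channel coding. Below is a minimal standalone sketch of that comparison; all names, units and factors here are illustrative assumptions for the sketch, not the kernel's actual helpers (the driver uses intel_dp_link_required(), drm_dp_bw_overhead() and friends):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal sketch of the mode-vs-link BW comparison. bpp_x16 is a .4
 * fixed-point bits-per-pixel value. All names are illustrative.
 */

/* Data rate required by the mode, in kilobytes/sec. */
static long long mode_data_rate_kBps(int pixel_clock_khz, int bpp_x16)
{
	/* /16 for the fixed point, /8 for bits -> bytes */
	return (long long)pixel_clock_khz * bpp_x16 / 16 / 8;
}

/* Link capacity after channel coding, in kilobytes/sec. */
static long long link_data_rate_kBps(int link_rate_khz, int lane_count,
				     bool uhbr)
{
	long long raw = (long long)link_rate_khz * lane_count / 8;

	/* 8b/10b loses 20% to coding; 128b/132b (UHBR) loses ~3% */
	return uhbr ? raw * 128 / 132 : raw * 8 / 10;
}

static bool mode_fits_link(int pixel_clock_khz, int bpp_x16,
			   int link_rate_khz, int lane_count, bool uhbr)
{
	return mode_data_rate_kBps(pixel_clock_khz, bpp_x16) <=
	       link_data_rate_kBps(link_rate_khz, lane_count, uhbr);
}
```

The series applies this one comparison consistently across mode validation and state computation, instead of duplicating slightly different versions of it.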

Cc: Ankit Nautiyal <ankit.k.nautiyal@intel.com>

Imre Deak (50):
  drm/dp: Parse all DSC slice count caps for eDP 1.5
  drm/dp: Add drm_dp_dsc_sink_slice_count_mask()
  drm/i915/dp: Fix DSC sink's slice count capability check
  drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp()
  drm/i915/dp: Use a mode's crtc_clock vs. clock during state
    computation
  drm/i915/dp: Factor out intel_dp_link_bw_overhead()
  drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config()
  drm/i915/dp: Use the effective data rate for DP BW calculation
  drm/i915/dp: Use the effective data rate for DP compressed BW
    calculation
  drm/i915/dp: Account with MST,SSC BW overhead for uncompressed DP-MST
    stream BW
  drm/i915/dp: Account with DSC BW overhead for compressed DP-SST stream
    BW
  drm/i915/dp: Account with pipe joiner max compressed BPP limit for
    DP-MST and eDP
  drm/i915/dp: Drop unused timeslots param from
    dsc_compute_link_config()
  drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  drm/i915/dp: Fail state computation for invalid min/max link BPP
    values
  drm/i915/dp: Fail state computation for invalid max throughput BPP
    value
  drm/i915/dp: Fail state computation for invalid max sink compressed
    BPP value
  drm/i915/dp: Fail state computation for invalid DSC source input BPP
    values
  drm/i915/dp: Align min/max DSC input BPPs to sink caps
  drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits
  drm/i915/dp: Drop intel_dp parameter from
    intel_dp_compute_config_link_bpp_limits()
  drm/i915/dp: Pass intel_output_format to
    intel_dp_dsc_sink_{min_max}_compressed_bpp()
  drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16()
  drm/i915/dp: Factor out compute_min_compressed_bpp_x16()
  drm/i915/dp: Factor out compute_max_compressed_bpp_x16()
  drm/i915/dp: Add intel_dp_mode_valid_with_dsc()
  drm/i915/dp: Unify detect and compute time DSC mode BW validation
  drm/i915/dp: Use helpers to align min/max compressed BPPs
  drm/i915/dp: Simplify computing DSC BPPs for eDP
  drm/i915/dp: Simplify computing DSC BPPs for DP-SST
  drm/i915/dp: Simplify computing forced DSC BPP for DP-SST
  drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP
  drm/i915/dp: Simplify eDP vs. DP compressed BPP computation
  drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST
  drm/i915/dsc: Track the detailed DSC slice configuration
  drm/i915/dsc: Track the DSC stream count in the DSC slice config state
  drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to
    fill_dsc()
  drm/i915/dsi: Track the detailed DSC slice configuration
  drm/i915/dp: Track the detailed DSC slice configuration
  drm/i915/dsc: Switch to using intel_dsc_line_slice_count()
  drm/i915/dp: Factor out intel_dp_dsc_min_slice_count()
  drm/i915/dp: Use int for DSC slice count variables
  drm/i915/dp: Rename test_slice_count to slices_per_line
  drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe
    iteration
  drm/i915/dsc: Add intel_dsc_get_slice_config()
  drm/i915/dsi: Use intel_dsc_get_slice_config()
  drm/i915/dp: Unify DP and eDP slice count computation
  drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  drm/i915/dp: Use intel_dp_dsc_get_slice_config()

 drivers/gpu/drm/display/drm_dp_helper.c       | 103 ++-
 drivers/gpu/drm/i915/display/icl_dsi.c        |   6 -
 drivers/gpu/drm/i915/display/intel_bios.c     |  27 +-
 drivers/gpu/drm/i915/display/intel_display.c  |   2 +-
 .../drm/i915/display/intel_display_types.h    |   7 +-
 drivers/gpu/drm/i915/display/intel_dp.c       | 868 +++++++++---------
 drivers/gpu/drm/i915/display/intel_dp.h       |  26 +-
 .../drm/i915/display/intel_dp_link_training.c |   4 +-
 drivers/gpu/drm/i915/display/intel_dp_mst.c   | 110 +--
 drivers/gpu/drm/i915/display/intel_vdsc.c     |  71 +-
 drivers/gpu/drm/i915/display/intel_vdsc.h     |   6 +
 include/drm/display/drm_dp_helper.h           |   3 +
 12 files changed, 658 insertions(+), 575 deletions(-)

-- 
2.49.1


^ permalink raw reply	[flat|nested] 137+ messages in thread

* [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-08 11:24   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask() Imre Deak
                   ` (53 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: dri-devel

eDP 1.5 supports all the slice counts reported via DP_DSC_SLICE_CAP_1,
so adjust drm_dp_dsc_sink_max_slice_count() accordingly.
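The patch below makes the highest-first DSC_SLICE_CAP_1 scan common to DP and eDP 1.5. As a standalone illustration, here is the same scan in isolation; the bit positions are stand-ins chosen for this sketch, not the DP_DSC_*_PER_DP_DSC_SINK definitions from drm_dp.h:

```c
#include <assert.h>

/* Illustrative DSC_SLICE_CAP_1 bit layout for the sketch. */
#define CAP1_1_SLICE	(1 << 0)
#define CAP1_2_SLICES	(1 << 1)
#define CAP1_4_SLICES	(1 << 3)
#define CAP1_6_SLICES	(1 << 4)
#define CAP1_8_SLICES	(1 << 5)
#define CAP1_10_SLICES	(1 << 6)
#define CAP1_12_SLICES	(1 << 7)

/*
 * Highest-first scan over the capability bits: the first bit found is the
 * maximum supported slice count. Before the fix, the eDP branch only ever
 * looked at the 1/2/4-slice bits, ignoring 6..12 on eDP 1.5 sinks.
 */
static int max_slices_from_cap1(unsigned int cap1)
{
	if (cap1 & CAP1_12_SLICES)
		return 12;
	if (cap1 & CAP1_10_SLICES)
		return 10;
	if (cap1 & CAP1_8_SLICES)
		return 8;
	if (cap1 & CAP1_6_SLICES)
		return 6;
	if (cap1 & CAP1_4_SLICES)
		return 4;
	if (cap1 & CAP1_2_SLICES)
		return 2;
	if (cap1 & CAP1_1_SLICE)
		return 1;

	return 0;
}
```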

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/display/drm_dp_helper.c | 41 +++++++++++--------------
 1 file changed, 18 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
index f9fdf19de74a9..19564c1afba6c 100644
--- a/drivers/gpu/drm/display/drm_dp_helper.c
+++ b/drivers/gpu/drm/display/drm_dp_helper.c
@@ -2725,15 +2725,7 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
 {
 	u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
 
-	if (is_edp) {
-		/* For eDP, register DSC_SLICE_CAPABILITIES_1 gives slice count */
-		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
-			return 4;
-		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
-			return 2;
-		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
-			return 1;
-	} else {
+	if (!is_edp) {
 		/* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 */
 		u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
 
@@ -2743,22 +2735,25 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
 			return 20;
 		if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
 			return 16;
-		if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
-			return 12;
-		if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
-			return 10;
-		if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
-			return 8;
-		if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
-			return 6;
-		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
-			return 4;
-		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
-			return 2;
-		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
-			return 1;
 	}
 
+	/* DP, eDP v1.5+ */
+	if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
+		return 12;
+	if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
+		return 10;
+	if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
+		return 8;
+	if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
+		return 6;
+	/* DP, eDP v1.4+ */
+	if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
+		return 4;
+	if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
+		return 2;
+	if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
+		return 1;
+
 	return 0;
 }
 EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);
-- 
2.49.1



* [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
  2025-11-27 17:49 ` [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5 Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09  8:48   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check Imre Deak
                   ` (52 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: dri-devel

A DSC sink that supports a DSC slice count of N does not necessarily
support slice counts less than N. Hence the driver should check the
sink's support for a particular slice count before using it. Add the
helper functions required for this.
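The mask scheme the patch introduces maps slice count N to bit N-1, so one mask answers both "is this exact count supported?" and, via the highest set bit, "what is the maximum?". A sketch of that scheme, with fls_u32() as a portable stand-in for the kernel's fls():

```c
#include <assert.h>

/* Slice count N maps to bit N-1, as in drm_dp_dsc_slice_count_to_mask(). */
static unsigned int slice_count_to_mask(int slice_count)
{
	return 1u << (slice_count - 1);
}

/* 1-based position of the highest set bit; 0 if no bit is set. */
static int fls_u32(unsigned int v)
{
	int n = 0;

	while (v) {
		n++;
		v >>= 1;
	}

	return n;
}
```

Because of the N-to-bit-N-1 mapping, fls() applied to the sink's slice count mask directly yields the maximum supported slice count, which is how the patch reimplements drm_dp_dsc_sink_max_slice_count().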

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/display/drm_dp_helper.c | 82 +++++++++++++++++--------
 include/drm/display/drm_dp_helper.h     |  3 +
 2 files changed, 61 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
index 19564c1afba6c..a697cc227e289 100644
--- a/drivers/gpu/drm/display/drm_dp_helper.c
+++ b/drivers/gpu/drm/display/drm_dp_helper.c
@@ -2705,56 +2705,90 @@ u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
 EXPORT_SYMBOL(drm_dp_dsc_sink_bpp_incr);
 
 /**
- * drm_dp_dsc_sink_max_slice_count() - Get the max slice count
- * supported by the DSC sink.
- * @dsc_dpcd: DSC capabilities from DPCD
- * @is_edp: true if its eDP, false for DP
+ * drm_dp_dsc_slice_count_to_mask() - Convert a slice count to a slice count mask
+ * @slice_count: slice count
  *
- * Read the slice capabilities DPCD register from DSC sink to get
- * the maximum slice count supported. This is used to populate
- * the DSC parameters in the &struct drm_dsc_config by the driver.
- * Driver creates an infoframe using these parameters to populate
- * &struct drm_dsc_pps_infoframe. These are sent to the sink using DSC
- * infoframe using the helper function drm_dsc_pps_infoframe_pack()
+ * Convert @slice_count to a slice count mask.
+ *
+ * Returns the slice count mask.
+ */
+u32 drm_dp_dsc_slice_count_to_mask(int slice_count)
+{
+	return BIT(slice_count - 1);
+}
+EXPORT_SYMBOL(drm_dp_dsc_slice_count_to_mask);
+
+/**
+ * drm_dp_dsc_sink_slice_count_mask() - Get the mask of valid DSC sink slice counts
+ * @dsc_dpcd: the sink's DSC DPCD capabilities
+ * @is_edp: %true for an eDP sink
+ *
+ * Get the mask of supported slice counts from the sink's DSC DPCD register.
  *
  * Returns:
- * Maximum slice count supported by DSC sink or 0 its invalid
+ * Mask of slice counts supported by the DSC sink:
+ * - > 0: bit#0,1,3,5..,23 set if the sink supports 1,2,4,6..,24 slices
+ * - 0:   if the sink doesn't support any slices
  */
-u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
-				   bool is_edp)
+u32 drm_dp_dsc_sink_slice_count_mask(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
+				     bool is_edp)
 {
 	u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
+	u32 mask = 0;
 
 	if (!is_edp) {
 		/* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 */
 		u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
 
 		if (slice_cap2 & DP_DSC_24_PER_DP_DSC_SINK)
-			return 24;
+			mask |= drm_dp_dsc_slice_count_to_mask(24);
 		if (slice_cap2 & DP_DSC_20_PER_DP_DSC_SINK)
-			return 20;
+			mask |= drm_dp_dsc_slice_count_to_mask(20);
 		if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
-			return 16;
+			mask |= drm_dp_dsc_slice_count_to_mask(16);
 	}
 
 	/* DP, eDP v1.5+ */
 	if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
-		return 12;
+		mask |= drm_dp_dsc_slice_count_to_mask(12);
 	if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
-		return 10;
+		mask |= drm_dp_dsc_slice_count_to_mask(10);
 	if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
-		return 8;
+		mask |= drm_dp_dsc_slice_count_to_mask(8);
 	if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
-		return 6;
+		mask |= drm_dp_dsc_slice_count_to_mask(6);
 	/* DP, eDP v1.4+ */
 	if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
-		return 4;
+		mask |= drm_dp_dsc_slice_count_to_mask(4);
 	if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
-		return 2;
+		mask |= drm_dp_dsc_slice_count_to_mask(2);
 	if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
-		return 1;
+		mask |= drm_dp_dsc_slice_count_to_mask(1);
 
-	return 0;
+	return mask;
+}
+EXPORT_SYMBOL(drm_dp_dsc_sink_slice_count_mask);
+
+/**
+ * drm_dp_dsc_sink_max_slice_count() - Get the max slice count
+ * supported by the DSC sink.
+ * @dsc_dpcd: DSC capabilities from DPCD
+ * @is_edp: true if its eDP, false for DP
+ *
+ * Read the slice capabilities DPCD register from DSC sink to get
+ * the maximum slice count supported. This is used to populate
+ * the DSC parameters in the &struct drm_dsc_config by the driver.
+ * Driver creates an infoframe using these parameters to populate
+ * &struct drm_dsc_pps_infoframe. These are sent to the sink using DSC
+ * infoframe using the helper function drm_dsc_pps_infoframe_pack()
+ *
+ * Returns:
+ * Maximum slice count supported by DSC sink or 0 its invalid
+ */
+u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
+				   bool is_edp)
+{
+	return fls(drm_dp_dsc_sink_slice_count_mask(dsc_dpcd, is_edp));
 }
 EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);
 
diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
index df2f24b950e4c..85e868238e287 100644
--- a/include/drm/display/drm_dp_helper.h
+++ b/include/drm/display/drm_dp_helper.h
@@ -206,6 +206,9 @@ drm_dp_is_branch(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
 
 /* DP/eDP DSC support */
 u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]);
+u32 drm_dp_dsc_slice_count_to_mask(int slice_count);
+u32 drm_dp_dsc_sink_slice_count_mask(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
+				     bool is_edp);
 u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
 				   bool is_edp);
 u8 drm_dp_dsc_sink_line_buf_depth(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]);
-- 
2.49.1



* [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
  2025-11-27 17:49 ` [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5 Imre Deak
  2025-11-27 17:49 ` [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09  8:51   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp() Imre Deak
                   ` (51 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: dri-devel

A DSC sink that supports a DSC slice count of N does not necessarily
support slice counts less than N. Hence the driver should check the
sink's support for a particular slice count before using it; fix
intel_dp_dsc_get_slice_count() accordingly.
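To see why comparing a candidate against the sink's maximum is not enough, consider a hypothetical sink advertising only 2 and 8 slices. A sketch using the same bit-N-1-for-N-slices mask convention as drm_dp_dsc_sink_slice_count_mask() (names here are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Slice count N maps to bit N-1 of the mask. */
static unsigned int to_mask(int slice_count)
{
	return 1u << (slice_count - 1);
}

/* Exact membership test, replacing the old "candidate <= max" check. */
static bool sink_supports_slice_count(unsigned int sink_mask, int slice_count)
{
	return sink_mask & to_mask(slice_count);
}
```

For the {2, 8} sink the old check would accept a slice count of 4 (since 4 <= 8), while the membership test correctly rejects it and the driver moves on to the next candidate.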

Cc: dri-devel@lists.freedesktop.org
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 0ec82fcbcf48e..6d232c15a0b5a 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1013,6 +1013,8 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 				int num_joined_pipes)
 {
 	struct intel_display *display = to_intel_display(connector);
+	u32 sink_slice_count_mask =
+		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
 	u8 min_slice_count, i;
 	int max_slice_width;
 	int tp_rgb_yuv444;
@@ -1084,9 +1086,9 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
 			continue;
 
-		if (test_slice_count >
-		    drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, false))
-			break;
+		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
+		      sink_slice_count_mask))
+			continue;
 
 		 /*
 		  * Bigjoiner needs small joiner to be enabled.
@@ -1103,8 +1105,14 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 			return test_slice_count;
 	}
 
-	drm_dbg_kms(display->drm, "Unsupported Slice Count %d\n",
-		    min_slice_count);
+	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
+	sink_slice_count_mask <<= 1;
+	drm_dbg_kms(display->drm,
+		    "[CONNECTOR:%d:%s] Unsupported slice count (min: %d, sink supported: %*pbl)\n",
+		    connector->base.base.id, connector->base.name,
+		    min_slice_count,
+		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
+
 	return 0;
 }
 
-- 
2.49.1



* [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (2 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09  9:10   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation Imre Deak
                   ` (50 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Convert intel_dp_output_bpp() and intel_dp_mode_min_output_bpp() to
return an x16 fixed point BPP value, as this value will always be the
link BPP (either compressed or uncompressed), which is tracked in the
same x16 fixed point format.

While at it, rename
intel_dp_output_bpp() to intel_dp_output_format_link_bpp_x16() and
intel_dp_mode_min_output_bpp() to intel_dp_mode_min_link_bpp_x16() to
better reflect that these functions return an x16 link BPP value
specific to a particular output format or mode.

Also rename intel_dp_output_bpp()'s bpp parameter to pipe_bpp, to
clarify which kind of BPP (pipe vs. link) the parameter is.
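The ".4"/x16 fixed-point scheme used for link BPP values can be sketched as follows; the fxp_q4_* helpers mirror the kernel functions of the same name, simplified here for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* .4 fixed point: the low 4 bits are the fractional part. */
static int fxp_q4_from_int(int val)
{
	return val << 4;
}

static int fxp_q4_to_int(int val_q4)
{
	return val_q4 >> 4;
}

static int fxp_q4_to_int_roundup(int val_q4)
{
	return (val_q4 + 15) >> 4;
}

/* Link BPP for a given output format: YCbCr 4:2:0 carries half the data. */
static int output_format_link_bpp_x16(int pipe_bpp, bool is_ycbcr420)
{
	if (is_ycbcr420)
		pipe_bpp /= 2;

	return fxp_q4_from_int(pipe_bpp);
}
```

Returning the x16 value directly lets callers compare and clamp against other x16 link BPP limits without repeated conversions, which is what the renames above are meant to signal.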

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c     | 41 +++++++++++----------
 drivers/gpu/drm/i915/display/intel_dp.h     |  3 +-
 drivers/gpu/drm/i915/display/intel_dp_mst.c |  4 +-
 3 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 6d232c15a0b5a..beda340d05923 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1234,7 +1234,7 @@ int intel_dp_min_bpp(enum intel_output_format output_format)
 		return 8 * 3;
 }
 
-int intel_dp_output_bpp(enum intel_output_format output_format, int bpp)
+int intel_dp_output_format_link_bpp_x16(enum intel_output_format output_format, int pipe_bpp)
 {
 	/*
 	 * bpp value was assumed to RGB format. And YCbCr 4:2:0 output
@@ -1242,9 +1242,9 @@ int intel_dp_output_bpp(enum intel_output_format output_format, int bpp)
 	 * of bytes of RGB pixel.
 	 */
 	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
-		bpp /= 2;
+		pipe_bpp /= 2;
 
-	return bpp;
+	return fxp_q4_from_int(pipe_bpp);
 }
 
 static enum intel_output_format
@@ -1260,8 +1260,8 @@ intel_dp_sink_format(struct intel_connector *connector,
 }
 
 static int
-intel_dp_mode_min_output_bpp(struct intel_connector *connector,
-			     const struct drm_display_mode *mode)
+intel_dp_mode_min_link_bpp_x16(struct intel_connector *connector,
+			       const struct drm_display_mode *mode)
 {
 	enum intel_output_format output_format, sink_format;
 
@@ -1269,7 +1269,8 @@ intel_dp_mode_min_output_bpp(struct intel_connector *connector,
 
 	output_format = intel_dp_output_format(connector, sink_format);
 
-	return intel_dp_output_bpp(output_format, intel_dp_min_bpp(output_format));
+	return intel_dp_output_format_link_bpp_x16(output_format,
+						   intel_dp_min_bpp(output_format));
 }
 
 static bool intel_dp_hdisplay_bad(struct intel_display *display,
@@ -1341,11 +1342,11 @@ intel_dp_mode_valid_downstream(struct intel_connector *connector,
 
 	/* If PCON supports FRL MODE, check FRL bandwidth constraints */
 	if (intel_dp->dfp.pcon_max_frl_bw) {
+		int link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
 		int target_bw;
 		int max_frl_bw;
-		int bpp = intel_dp_mode_min_output_bpp(connector, mode);
 
-		target_bw = bpp * target_clock;
+		target_bw = fxp_q4_to_int_roundup(link_bpp_x16) * target_clock;
 
 		max_frl_bw = intel_dp->dfp.pcon_max_frl_bw;
 
@@ -1460,6 +1461,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 	enum drm_mode_status status;
 	bool dsc = false;
 	int num_joined_pipes;
+	int link_bpp_x16;
 
 	status = intel_cpu_transcoder_mode_valid(display, mode);
 	if (status != MODE_OK)
@@ -1502,8 +1504,8 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 
 	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
 
-	mode_rate = intel_dp_link_required(target_clock,
-					   intel_dp_mode_min_output_bpp(connector, mode));
+	link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
+	mode_rate = intel_dp_link_required(target_clock, fxp_q4_to_int_roundup(link_bpp_x16));
 
 	if (intel_dp_has_dsc(connector)) {
 		int pipe_bpp;
@@ -1815,9 +1817,10 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 	for (bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
 	     bpp >= fxp_q4_to_int(limits->link.min_bpp_x16);
 	     bpp -= 2 * 3) {
-		int link_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
+		int link_bpp_x16 =
+			intel_dp_output_format_link_bpp_x16(pipe_config->output_format, bpp);
 
-		mode_rate = intel_dp_link_required(clock, link_bpp);
+		mode_rate = intel_dp_link_required(clock, fxp_q4_to_int_roundup(link_bpp_x16));
 
 		for (i = 0; i < intel_dp->num_common_rates; i++) {
 			link_rate = intel_dp_common_rate(intel_dp, i);
@@ -2201,10 +2204,10 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 	struct intel_display *display = to_intel_display(intel_dp);
 	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
-	int output_bpp;
 	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
 	int dsc_joiner_max_bpp;
 	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
+	int link_bpp_x16;
 	int bpp_x16;
 	int ret;
 
@@ -2216,8 +2219,8 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
 
 	/* Compressed BPP should be less than the Input DSC bpp */
-	output_bpp = intel_dp_output_bpp(pipe_config->output_format, pipe_bpp);
-	max_bpp_x16 = min(max_bpp_x16, fxp_q4_from_int(output_bpp) - bpp_step_x16);
+	link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format, pipe_bpp);
+	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
 
 	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
 	min_bpp_x16 = round_up(limits->link.min_bpp_x16, bpp_step_x16);
@@ -3267,8 +3270,8 @@ int intel_dp_compute_min_hblank(struct intel_crtc_state *crtc_state,
 	if (crtc_state->dsc.compression_enable)
 		link_bpp_x16 = crtc_state->dsc.compressed_bpp_x16;
 	else
-		link_bpp_x16 = fxp_q4_from_int(intel_dp_output_bpp(crtc_state->output_format,
-								   crtc_state->pipe_bpp));
+		link_bpp_x16 = intel_dp_output_format_link_bpp_x16(crtc_state->output_format,
+								   crtc_state->pipe_bpp);
 
 	/* Calculate min Hblank Link Layer Symbol Cycle Count for 8b/10b MST & 128b/132b */
 	hactive_sym_cycles = drm_dp_link_symbol_cycles(max_lane_count,
@@ -3378,8 +3381,8 @@ intel_dp_compute_config(struct intel_encoder *encoder,
 	if (pipe_config->dsc.compression_enable)
 		link_bpp_x16 = pipe_config->dsc.compressed_bpp_x16;
 	else
-		link_bpp_x16 = fxp_q4_from_int(intel_dp_output_bpp(pipe_config->output_format,
-								   pipe_config->pipe_bpp));
+		link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format,
+								   pipe_config->pipe_bpp);
 
 	if (intel_dp->mso_link_count) {
 		int n = intel_dp->mso_link_count;
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 200a8b267f647..97e361458f760 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -193,7 +193,8 @@ void intel_dp_pcon_dsc_configure(struct intel_dp *intel_dp,
 
 void intel_dp_invalidate_source_oui(struct intel_dp *intel_dp);
 void intel_dp_wait_source_oui(struct intel_dp *intel_dp);
-int intel_dp_output_bpp(enum intel_output_format output_format, int bpp);
+int intel_dp_output_format_link_bpp_x16(enum intel_output_format output_format,
+					int pipe_bpp);
 
 bool intel_dp_compute_config_limits(struct intel_dp *intel_dp,
 				    struct drm_connector_state *conn_state,
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 4c0b943fe86f1..1a4784f0cd6bd 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -344,8 +344,8 @@ int intel_dp_mtp_tu_compute_config(struct intel_dp *intel_dp,
 		}
 
 		link_bpp_x16 = dsc ? bpp_x16 :
-			fxp_q4_from_int(intel_dp_output_bpp(crtc_state->output_format,
-							    fxp_q4_to_int(bpp_x16)));
+			intel_dp_output_format_link_bpp_x16(crtc_state->output_format,
+							    fxp_q4_to_int(bpp_x16));
 
 		local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state,
 							     false, dsc_slice_count, link_bpp_x16);
-- 
2.49.1



* [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (3 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09 12:51   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead() Imre Deak
                   ` (49 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The encoder state computation should use the
drm_display_mode::crtc_clock member instead of the clock member, since
the former may carry an adjustment wrt. the latter due to driver
specific constraints. In practice the two values should not differ at
the spots changed in this patch, since only MSO and 3D modes would make
them differ and neither MSO nor 3D is relevant here; still, use the
expected crtc_clock member for consistency.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index beda340d05923..d70cb35cf68bc 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2050,7 +2050,8 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 					continue;
 			} else {
 				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
-								     lane_count, adjusted_mode->clock,
+								     lane_count,
+								     adjusted_mode->crtc_clock,
 								     pipe_config->output_format,
 								     timeslots))
 					continue;
@@ -2211,7 +2212,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 	int bpp_x16;
 	int ret;
 
-	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->clock,
+	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->crtc_clock,
 								adjusted_mode->hdisplay,
 								num_joined_pipes);
 	max_bpp_x16 = min(fxp_q4_from_int(dsc_joiner_max_bpp), limits->link.max_bpp_x16);
-- 
2.49.1



* [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (4 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09 12:52   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config() Imre Deak
                   ` (48 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out intel_dp_link_bw_overhead(), to be used later for the BW
calculation during DP-SST mode validation and state computation.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c     | 26 +++++++++++++++++++++
 drivers/gpu/drm/i915/display/intel_dp.h     |  2 ++
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 22 +++++------------
 3 files changed, 34 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index d70cb35cf68bc..4722ee26b1181 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -424,6 +424,32 @@ static int intel_dp_min_lane_count(struct intel_dp *intel_dp)
 	return 1;
 }
 
+int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
+			      int dsc_slice_count, int bpp_x16, unsigned long flags)
+{
+	int overhead;
+
+	WARN_ON(flags & ~(DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK |
+			  DRM_DP_BW_OVERHEAD_FEC));
+
+	if (drm_dp_is_uhbr_rate(link_clock))
+		flags |= DRM_DP_BW_OVERHEAD_UHBR;
+
+	if (dsc_slice_count)
+		flags |= DRM_DP_BW_OVERHEAD_DSC;
+
+	overhead = drm_dp_bw_overhead(lane_count, hdisplay,
+				      dsc_slice_count,
+				      bpp_x16,
+				      flags);
+
+	/*
+	 * TODO: clarify whether a minimum required by the fixed FEC overhead
+	 * in the bspec audio programming sequence is required here.
+	 */
+	return max(overhead, intel_dp_bw_fec_overhead(flags & DRM_DP_BW_OVERHEAD_FEC));
+}
+
 /*
  * The required data bandwidth for a mode with given pixel clock and bpp. This
  * is the required net bandwidth independent of the data bandwidth efficiency.
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 97e361458f760..d7f9410129f49 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -117,6 +117,8 @@ void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
 bool intel_dp_source_supports_tps3(struct intel_display *display);
 bool intel_dp_source_supports_tps4(struct intel_display *display);
 
+int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
+			      int dsc_slice_count, int bpp_x16, unsigned long flags);
 int intel_dp_link_required(int pixel_clock, int bpp);
 int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				 int bw_overhead);
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 1a4784f0cd6bd..c1058b4a85d02 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -180,26 +180,16 @@ static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
 	unsigned long flags = DRM_DP_BW_OVERHEAD_MST;
-	int overhead;
 
-	flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0;
 	flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0;
 	flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
 
-	if (dsc_slice_count)
-		flags |= DRM_DP_BW_OVERHEAD_DSC;
-
-	overhead = drm_dp_bw_overhead(crtc_state->lane_count,
-				      adjusted_mode->hdisplay,
-				      dsc_slice_count,
-				      bpp_x16,
-				      flags);
-
-	/*
-	 * TODO: clarify whether a minimum required by the fixed FEC overhead
-	 * in the bspec audio programming sequence is required here.
-	 */
-	return max(overhead, intel_dp_bw_fec_overhead(crtc_state->fec_enable));
+	return intel_dp_link_bw_overhead(crtc_state->port_clock,
+					 crtc_state->lane_count,
+					 adjusted_mode->hdisplay,
+					 dsc_slice_count,
+					 bpp_x16,
+					 flags);
 }
 
 static void intel_dp_mst_compute_m_n(const struct intel_crtc_state *crtc_state,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (5 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-09 12:53   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation Imre Deak
                   ` (47 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

is_bw_sufficient_for_dsc_config() should return true also when the
required BW exactly equals the available BW; make it so.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 4722ee26b1181..4556a57db7c02 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2029,7 +2029,7 @@ static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
 	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
 	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
 
-	return available_bw > required_bw;
+	return available_bw >= required_bw;
 }
 
 static int dsc_compute_link_config(struct intel_dp *intel_dp,
-- 
2.49.1



* [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (6 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 12:48   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed " Imre Deak
                   ` (46 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Use intel_dp_effective_data_rate() to calculate the required link BW for
eDP, DP-SST and MST links. This ensures that the BW is calculated the
same way for all DP output types, during mode validation as well as
during state computation. This approach also allows accounting for the
BW overheads due to SSC, DSC and FEC being enabled on a link, as well
as for the MST symbol alignment on the link. Accounting for these
overheads will be added by follow-up changes.

This way also computes the stream BW on a UHBR link correctly, using the
corresponding symbol size to effective data size ratio (i.e. ~97% link
BW utilization for UHBR vs. only ~80% for non-UHBR).

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c       | 40 +++++++++++--------
 drivers/gpu/drm/i915/display/intel_dp.h       |  4 +-
 .../drm/i915/display/intel_dp_link_training.c |  4 +-
 drivers/gpu/drm/i915/display/intel_dp_mst.c   |  4 +-
 4 files changed, 33 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 4556a57db7c02..aa55a81a9a9bf 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -453,15 +453,15 @@ int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
 /*
  * The required data bandwidth for a mode with given pixel clock and bpp. This
  * is the required net bandwidth independent of the data bandwidth efficiency.
- *
- * TODO: check if callers of this functions should use
- * intel_dp_effective_data_rate() instead.
  */
-int
-intel_dp_link_required(int pixel_clock, int bpp)
+int intel_dp_link_required(int link_clock, int lane_count,
+			   int mode_clock, int mode_hdisplay,
+			   int link_bpp_x16, unsigned long bw_overhead_flags)
 {
-	/* pixel_clock is in kHz, divide bpp by 8 for bit to Byte conversion */
-	return DIV_ROUND_UP(pixel_clock * bpp, 8);
+	int bw_overhead = intel_dp_link_bw_overhead(link_clock, lane_count, mode_hdisplay,
+						    0, link_bpp_x16, bw_overhead_flags);
+
+	return intel_dp_effective_data_rate(mode_clock, link_bpp_x16, bw_overhead);
 }
 
 /**
@@ -1531,7 +1531,9 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
 
 	link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
-	mode_rate = intel_dp_link_required(target_clock, fxp_q4_to_int_roundup(link_bpp_x16));
+	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
+					   target_clock, mode->hdisplay,
+					   link_bpp_x16, 0);
 
 	if (intel_dp_has_dsc(connector)) {
 		int pipe_bpp;
@@ -1838,7 +1840,7 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 				  const struct link_config_limits *limits)
 {
 	int bpp, i, lane_count, clock = intel_dp_mode_clock(pipe_config, conn_state);
-	int mode_rate, link_rate, link_avail;
+	int link_rate, link_avail;
 
 	for (bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
 	     bpp >= fxp_q4_to_int(limits->link.min_bpp_x16);
@@ -1846,8 +1848,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 		int link_bpp_x16 =
 			intel_dp_output_format_link_bpp_x16(pipe_config->output_format, bpp);
 
-		mode_rate = intel_dp_link_required(clock, fxp_q4_to_int_roundup(link_bpp_x16));
-
 		for (i = 0; i < intel_dp->num_common_rates; i++) {
 			link_rate = intel_dp_common_rate(intel_dp, i);
 			if (link_rate < limits->min_rate ||
@@ -1857,11 +1857,17 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
 			for (lane_count = limits->min_lane_count;
 			     lane_count <= limits->max_lane_count;
 			     lane_count <<= 1) {
+				const struct drm_display_mode *adjusted_mode =
+					&pipe_config->hw.adjusted_mode;
+				int mode_rate =
+					intel_dp_link_required(link_rate, lane_count,
+							       clock, adjusted_mode->hdisplay,
+							       link_bpp_x16, 0);
+
 				link_avail = intel_dp_max_link_data_rate(intel_dp,
 									 link_rate,
 									 lane_count);
 
-
 				if (mode_rate <= link_avail) {
 					pipe_config->lane_count = lane_count;
 					pipe_config->pipe_bpp = bpp;
@@ -2724,11 +2730,13 @@ int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state)
 {
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
-	int bpp = crtc_state->dsc.compression_enable ?
-		fxp_q4_to_int_roundup(crtc_state->dsc.compressed_bpp_x16) :
-		crtc_state->pipe_bpp;
+	int link_bpp_x16 = crtc_state->dsc.compression_enable ?
+		crtc_state->dsc.compressed_bpp_x16 :
+		fxp_q4_from_int(crtc_state->pipe_bpp);
 
-	return intel_dp_link_required(adjusted_mode->crtc_clock, bpp);
+	return intel_dp_link_required(crtc_state->port_clock, crtc_state->lane_count,
+				      adjusted_mode->crtc_clock, adjusted_mode->hdisplay,
+				      link_bpp_x16, 0);
 }
 
 bool intel_dp_joiner_needs_dsc(struct intel_display *display,
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index d7f9410129f49..30eebb8cad6d2 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -119,7 +119,9 @@ bool intel_dp_source_supports_tps4(struct intel_display *display);
 
 int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
 			      int dsc_slice_count, int bpp_x16, unsigned long flags);
-int intel_dp_link_required(int pixel_clock, int bpp);
+int intel_dp_link_required(int link_clock, int lane_count,
+			   int mode_clock, int mode_hdisplay,
+			   int link_bpp_x16, unsigned long bw_overhead_flags);
 int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
 				 int bw_overhead);
 int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
index aad5fe14962f9..54c585c59b900 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
@@ -1195,7 +1195,9 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
 		intel_panel_preferred_fixed_mode(intel_dp->attached_connector);
 	int mode_rate, max_rate;
 
-	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
+	mode_rate = intel_dp_link_required(link_rate, lane_count,
+					   fixed_mode->clock, fixed_mode->hdisplay,
+					   fxp_q4_from_int(18), 0);
 	max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
 	if (mode_rate > max_rate)
 		return false;
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index c1058b4a85d02..e4dd6b4ca0512 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1489,7 +1489,9 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
 
 	max_rate = intel_dp_max_link_data_rate(intel_dp,
 					       max_link_clock, max_lanes);
-	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
+	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
+					   mode->clock, mode->hdisplay,
+					   fxp_q4_from_int(min_bpp), 0);
 
 	/*
 	 * TODO:
-- 
2.49.1



* [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (7 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 12:50   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW Imre Deak
                   ` (45 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Use intel_dp_effective_data_rate() to calculate the required link BW for
compressed streams on non-UHBR DP-SST links. This ensures that the BW is
calculated the same way for all DP output types and DSC/non-DSC modes,
during mode validation as well as during state computation.

This approach also allows accounting for the BW overhead due to DSC and
FEC being enabled on a link. Accounting for these will be added by
follow-up changes.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 27 +++++++++++++++----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index aa55a81a9a9bf..4044bdbceaef5 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2025,15 +2025,19 @@ static bool intel_dp_dsc_supports_format(const struct intel_connector *connector
 	return drm_dp_dsc_sink_supports_format(connector->dp.dsc_dpcd, sink_dsc_format);
 }
 
-static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
-					    u32 lane_count, u32 mode_clock,
-					    enum intel_output_format output_format,
-					    int timeslots)
+static bool is_bw_sufficient_for_dsc_config(struct intel_dp *intel_dp,
+					    int link_clock, int lane_count,
+					    int mode_clock, int mode_hdisplay,
+					    int dsc_slice_count, int link_bpp_x16,
+					    unsigned long bw_overhead_flags)
 {
-	u32 available_bw, required_bw;
+	int available_bw;
+	int required_bw;
 
-	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
-	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
+	available_bw = intel_dp_max_link_data_rate(intel_dp, link_clock, lane_count);
+	required_bw = intel_dp_link_required(link_clock, lane_count,
+					     mode_clock, mode_hdisplay,
+					     link_bpp_x16, bw_overhead_flags);
 
 	return available_bw >= required_bw;
 }
@@ -2081,11 +2085,12 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 				if (ret)
 					continue;
 			} else {
-				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
-								     lane_count,
+				if (!is_bw_sufficient_for_dsc_config(intel_dp,
+								     link_rate, lane_count,
 								     adjusted_mode->crtc_clock,
-								     pipe_config->output_format,
-								     timeslots))
+								     adjusted_mode->hdisplay,
+								     pipe_config->dsc.slice_count,
+								     dsc_bpp_x16, 0))
 					continue;
 			}
 
-- 
2.49.1



* [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (8 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed " Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 13:08   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST " Imre Deak
                   ` (44 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

On MST links the symbol alignment and SSC incur a BW overhead, which
should be accounted for when calculating the required stream BW. Do so
during mode validation for an uncompressed stream.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index e4dd6b4ca0512..0db6ed2d9664c 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1458,6 +1458,8 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
 	const int min_bpp = 18;
 	int max_dotclk = display->cdclk.max_dotclk_freq;
 	int max_rate, mode_rate, max_lanes, max_link_clock;
+	unsigned long bw_overhead_flags =
+		DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK;
 	int ret;
 	bool dsc = false;
 	u16 dsc_max_compressed_bpp = 0;
@@ -1491,7 +1493,8 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
 					       max_link_clock, max_lanes);
 	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
 					   mode->clock, mode->hdisplay,
-					   fxp_q4_from_int(min_bpp), 0);
+					   fxp_q4_from_int(min_bpp),
+					   bw_overhead_flags);
 
 	/*
 	 * TODO:
-- 
2.49.1



* [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST stream BW
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (9 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 13:39   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP Imre Deak
                   ` (43 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

A DSC compressed stream requires FEC (except for eDP), which has a BW
overhead on non-UHBR links that must be accounted for explicitly. Do so
when computing the required BW.

Note that the overhead doesn't need to be accounted for on UHBR links
where FEC is always enabled and so the corresponding overhead is part of
the channel coding efficiency instead (i.e. the overhead is part of the
available vs. the required BW).

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 4044bdbceaef5..55be648283b19 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2085,12 +2085,16 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 				if (ret)
 					continue;
 			} else {
+				unsigned long bw_overhead_flags =
+					pipe_config->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
+
 				if (!is_bw_sufficient_for_dsc_config(intel_dp,
 								     link_rate, lane_count,
 								     adjusted_mode->crtc_clock,
 								     adjusted_mode->hdisplay,
 								     pipe_config->dsc.slice_count,
-								     dsc_bpp_x16, 0))
+								     dsc_bpp_x16,
+								     bw_overhead_flags))
 					continue;
 			}
 
-- 
2.49.1



* [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (10 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST " Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 14:29   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config() Imre Deak
                   ` (42 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The maximum compressed BPP must be limited based on the pipe joiner
memory size and BW. Do that for all DP outputs by adjusting the max
compressed BPP value already in
intel_dp_compute_config_link_bpp_limits() (which is used by all output
types).

This way the BPP doesn't need to be adjusted in
dsc_compute_compressed_bpp() (called for DP-SST after the above limits
have already been computed), so remove the adjustment from there.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 55be648283b19..def1f869febc2 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2245,19 +2245,12 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 {
 	struct intel_display *display = to_intel_display(intel_dp);
 	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
-	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
 	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
-	int dsc_joiner_max_bpp;
-	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
 	int link_bpp_x16;
 	int bpp_x16;
 	int ret;
 
-	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->crtc_clock,
-								adjusted_mode->hdisplay,
-								num_joined_pipes);
-	max_bpp_x16 = min(fxp_q4_from_int(dsc_joiner_max_bpp), limits->link.max_bpp_x16);
-
+	max_bpp_x16 = limits->link.max_bpp_x16;
 	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
 
 	/* Compressed BPP should be less than the Input DSC bpp */
@@ -2613,6 +2606,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
 		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
 		int throughput_max_bpp_x16;
+		int joiner_max_bpp;
 
 		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
 		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state);
@@ -2620,11 +2614,17 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
 
 		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
+		joiner_max_bpp =
+			get_max_compressed_bpp_with_joiner(display,
+							   adjusted_mode->crtc_clock,
+							   adjusted_mode->hdisplay,
+							   intel_crtc_num_joined_pipes(crtc_state));
 		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
 									crtc_state,
 									limits->pipe.max_bpp / 3);
 		dsc_max_bpp = dsc_sink_max_bpp ?
 			      min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
+		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
 
 		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
 
-- 
2.49.1



* [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (11 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-10 14:31   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp() Imre Deak
                   ` (41 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Drop the unused timeslots parameter from dsc_compute_link_config() and
other functions calling it.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++-----------
 1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index def1f869febc2..000fccc39a292 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2046,8 +2046,7 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 				   struct intel_crtc_state *pipe_config,
 				   struct drm_connector_state *conn_state,
 				   const struct link_config_limits *limits,
-				   int dsc_bpp_x16,
-				   int timeslots)
+				   int dsc_bpp_x16)
 {
 	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
 	int link_rate, lane_count;
@@ -2240,8 +2239,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 				      struct intel_crtc_state *pipe_config,
 				      struct drm_connector_state *conn_state,
 				      const struct link_config_limits *limits,
-				      int pipe_bpp,
-				      int timeslots)
+				      int pipe_bpp)
 {
 	struct intel_display *display = to_intel_display(intel_dp);
 	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
@@ -2269,8 +2267,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 					      pipe_config,
 					      conn_state,
 					      limits,
-					      bpp_x16,
-					      timeslots);
+					      bpp_x16);
 		if (ret == 0) {
 			pipe_config->dsc.compressed_bpp_x16 = bpp_x16;
 			if (intel_dp->force_dsc_fractional_bpp_en &&
@@ -2327,8 +2324,7 @@ int intel_dp_force_dsc_pipe_bpp(struct intel_dp *intel_dp,
 static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 					 struct intel_crtc_state *pipe_config,
 					 struct drm_connector_state *conn_state,
-					 const struct link_config_limits *limits,
-					 int timeslots)
+					 const struct link_config_limits *limits)
 {
 	const struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
@@ -2340,7 +2336,7 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 
 	if (forced_bpp) {
 		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
-						 limits, forced_bpp, timeslots);
+						 limits, forced_bpp);
 		if (ret == 0) {
 			pipe_config->pipe_bpp = forced_bpp;
 			return 0;
@@ -2358,7 +2354,7 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 			continue;
 
 		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
-						 limits, pipe_bpp, timeslots);
+						 limits, pipe_bpp);
 		if (ret == 0) {
 			pipe_config->pipe_bpp = pipe_bpp;
 			return 0;
@@ -2469,7 +2465,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 							     conn_state, limits);
 		else
 			ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
-							    conn_state, limits, timeslots);
+							    conn_state, limits);
 		if (ret) {
 			drm_dbg_kms(display->drm,
 				    "No Valid pipe bpp for given mode ret = %d\n", ret);
-- 
2.49.1



* [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (12 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12 15:41   ` Govindapillai, Vinod
  2025-12-15  7:46   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16() Imre Deak
                   ` (40 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out align_max_sink_dsc_input_bpp(), which will also be used
later for computing the maximum DSC input BPP limit.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++---------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 000fccc39a292..dcb9bc11e677b 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
 	return intel_dp_dsc_min_src_input_bpc();
 }
 
+static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
+					int max_pipe_bpp)
+{
+	u8 dsc_bpc[3];
+	int num_bpc;
+	int i;
+
+	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
+						       dsc_bpc);
+	for (i = 0; i < num_bpc; i++) {
+		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
+			return dsc_bpc[i] * 3;
+	}
+
+	return 0;
+}
+
 int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
 				 u8 max_req_bpc)
 {
 	struct intel_display *display = to_intel_display(connector);
-	int i, num_bpc;
-	u8 dsc_bpc[3] = {};
 	int dsc_max_bpc;
 
 	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
@@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
 
 	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
 
-	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
-						       dsc_bpc);
-	for (i = 0; i < num_bpc; i++) {
-		if (dsc_max_bpc >= dsc_bpc[i])
-			return dsc_bpc[i] * 3;
-	}
-
-	return 0;
+	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
 }
 
 static int intel_dp_source_dsc_version_minor(struct intel_display *display)
-- 
2.49.1



* [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (13 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12 15:46   ` Govindapillai, Vinod
  2025-12-15  7:49   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values Imre Deak
                   ` (39 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out align_max_vesa_compressed_bpp_x16(), which will also be used
later for computing the maximum DSC compressed BPP limit.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index dcb9bc11e677b..3111758578d6c 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
 		return 6144 * 8;
 }
 
+static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
+{
+	int i;
+
+	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
+		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
+
+		if (vesa_bpp_x16 <= max_link_bpp_x16)
+			return vesa_bpp_x16;
+	}
+
+	return 0;
+}
+
 static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
 {
 	u32 bits_per_pixel = bpp;
-	int i;
 
 	/* Error out if the max bpp is less than smallest allowed valid bpp */
 	if (bits_per_pixel < valid_dsc_bpp[0]) {
@@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
 		}
 		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
 	} else {
+		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
+
 		/* Find the nearest match in the array of known BPPs from VESA */
-		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
-			if (bits_per_pixel < valid_dsc_bpp[i + 1])
-				break;
-		}
-		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
-			    bits_per_pixel, valid_dsc_bpp[i]);
+		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
 
-		bits_per_pixel = valid_dsc_bpp[i];
+		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
+		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
 	}
 
 	return bits_per_pixel;
@@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
 bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
 {
 	struct intel_display *display = to_intel_display(intel_dp);
-	int i;
 
 	if (DISPLAY_VER(display) >= 13) {
 		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
@@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
 	if (fxp_q4_to_frac(bpp_x16))
 		return false;
 
-	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
-		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
-			return true;
-	}
-
-	return false;
+	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
 }
 
 /*
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (14 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12 15:48   ` Govindapillai, Vinod
  2025-12-15  7:51   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value Imre Deak
                   ` (38 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Make sure that state computation fails if the minimum/maximum link BPP
values became invalid as a result of limiting each of them separately to
the corresponding source/sink capability limits.
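
A minimal sketch of the idea (helper and struct names here are hypothetical, not the i915 ones): clamping the minimum and the maximum link BPP separately to independent caps can leave an empty range, which the state computation must then reject rather than use silently.

```c
#include <assert.h>

/* Link BPP limits in .4 fixed point, as in the i915 limits struct. */
struct link_bpp_limits {
	int min_bpp_x16;
	int max_bpp_x16;
};

static int clamp_int(int v, int lo, int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* The check this patch adds, in spirit: a positive, non-empty range. */
static int link_bpp_limits_valid(const struct link_bpp_limits *l)
{
	return l->min_bpp_x16 > 0 && l->min_bpp_x16 <= l->max_bpp_x16;
}
```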

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 3111758578d6c..545d872a30403 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2654,7 +2654,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 	limits->link.max_bpp_x16 = max_link_bpp_x16;
 
 	drm_dbg_kms(display->drm,
-		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d max link_bpp " FXP_Q4_FMT "\n",
+		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d min link_bpp " FXP_Q4_FMT " max link_bpp " FXP_Q4_FMT "\n",
 		    encoder->base.base.id, encoder->base.name,
 		    crtc->base.base.id, crtc->base.name,
 		    adjusted_mode->crtc_clock,
@@ -2662,8 +2662,13 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		    limits->max_lane_count,
 		    limits->max_rate,
 		    limits->pipe.max_bpp,
+		    FXP_Q4_ARGS(limits->link.min_bpp_x16),
 		    FXP_Q4_ARGS(limits->link.max_bpp_x16));
 
+	if (limits->link.min_bpp_x16 <= 0 ||
+	    limits->link.min_bpp_x16 > limits->link.max_bpp_x16)
+		return false;
+
 	return true;
 }
 
-- 
2.49.1



* [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (15 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12 15:51   ` Govindapillai, Vinod
  2025-12-15  7:51   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed " Imre Deak
                   ` (37 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

There is no reason to accept a minimum/maximum link BPP value above the
maximum throughput BPP value; fail the state computation in this case.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 545d872a30403..f97ee8265836a 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2638,8 +2638,6 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
 
 		throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector, crtc_state);
-		throughput_max_bpp_x16 = clamp(throughput_max_bpp_x16,
-					       limits->link.min_bpp_x16, max_link_bpp_x16);
 		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
 			max_link_bpp_x16 = throughput_max_bpp_x16;
 
-- 
2.49.1



* [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed BPP value
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (16 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12 15:52   ` Govindapillai, Vinod
  2025-12-15  7:52   ` Luca Coelho
  2025-11-27 17:49 ` [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values Imre Deak
                   ` (36 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

There is no reason to accept an invalid maximum sink compressed BPP
value (i.e. 0); fail the state computation in this case.
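
A sketch of the behavior change (hypothetical helper names): previously a zero sink cap silently fell back to the source cap, whereas taking the plain minimum lets the zero propagate so the later min/max range check fails.

```c
#include <assert.h>

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

/* Old behavior: a zero (invalid) sink value fell back to the source limit. */
static int old_dsc_max_bpp(int sink_max, int src_max)
{
	return sink_max ? min_int(sink_max, src_max) : src_max;
}

/* New behavior: take the plain minimum, so a zero sink value propagates
 * and makes the subsequent BPP range check fail. */
static int new_dsc_max_bpp(int sink_max, int src_max)
{
	return min_int(sink_max, src_max);
}
```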

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index f97ee8265836a..db7e49c17ca8d 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2631,8 +2631,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
 									crtc_state,
 									limits->pipe.max_bpp / 3);
-		dsc_max_bpp = dsc_sink_max_bpp ?
-			      min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
+		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
 		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
 
 		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
-- 
2.49.1



* [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (17 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed " Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-11  8:29   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps Imre Deak
                   ` (35 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

There is no reason to accept an invalid minimum/maximum DSC source input
BPP value (i.e. a minimum DSC input BPP value above the maximum pipe BPP
value or a maximum DSC input BPP value below the minimum pipe BPP
value); fail the state computation in these cases.
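
A sketch of the check (assuming a DSC source input range such as 8..12 bpc; the helper name is hypothetical): limit the pipe BPP range to the source DSC input caps, then reject an empty result.

```c
#include <assert.h>

static int max_int(int a, int b) { return a > b ? a : b; }
static int min_int(int a, int b) { return a < b ? a : b; }

/* Limit the pipe BPP range to the DSC source input caps (3 * bpc) and
 * report whether a valid, non-empty range remains. */
static int dsc_pipe_bpp_limits_ok(int *min_bpp, int *max_bpp,
				  int dsc_min_bpc, int dsc_max_bpc)
{
	*min_bpp = max_int(*min_bpp, dsc_min_bpc * 3);
	*max_bpp = min_int(*max_bpp, dsc_max_bpc * 3);

	return *min_bpp > 0 && *min_bpp <= *max_bpp;
}
```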

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index db7e49c17ca8d..1ef64b90492ea 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2669,16 +2669,30 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 	return true;
 }
 
-static void
-intel_dp_dsc_compute_pipe_bpp_limits(struct intel_dp *intel_dp,
+static bool
+intel_dp_dsc_compute_pipe_bpp_limits(struct intel_connector *connector,
 				     struct link_config_limits *limits)
 {
-	struct intel_display *display = to_intel_display(intel_dp);
+	struct intel_display *display = to_intel_display(connector);
+	const struct link_config_limits orig_limits = *limits;
 	int dsc_min_bpc = intel_dp_dsc_min_src_input_bpc();
 	int dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
 
-	limits->pipe.max_bpp = clamp(limits->pipe.max_bpp, dsc_min_bpc * 3, dsc_max_bpc * 3);
-	limits->pipe.min_bpp = clamp(limits->pipe.min_bpp, dsc_min_bpc * 3, dsc_max_bpc * 3);
+	limits->pipe.min_bpp = max(limits->pipe.min_bpp, dsc_min_bpc * 3);
+	limits->pipe.max_bpp = min(limits->pipe.max_bpp, dsc_max_bpc * 3);
+
+	if (limits->pipe.min_bpp <= 0 ||
+	    limits->pipe.min_bpp > limits->pipe.max_bpp) {
+		drm_dbg_kms(display->drm,
+			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d)\n",
+			    connector->base.base.id, connector->base.name,
+			    dsc_min_bpc * 3, dsc_max_bpc * 3,
+			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp);
+
+		return false;
+	}
+
+	return true;
 }
 
 bool
@@ -2718,8 +2732,8 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
 							respect_downstream_limits);
 	}
 
-	if (dsc)
-		intel_dp_dsc_compute_pipe_bpp_limits(intel_dp, limits);
+	if (dsc && !intel_dp_dsc_compute_pipe_bpp_limits(connector, limits))
+		return false;
 
 	if (is_mst || intel_dp->use_max_params) {
 		/*
-- 
2.49.1



* [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (18 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-11  8:51   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits Imre Deak
                   ` (34 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Align the minimum/maximum DSC input BPPs to the corresponding sink DSC
input BPP capability limits already when computing the BPP limits. This
alignment is also performed later during state computation; however,
there is no reason to initialize the limits to an unaligned/incorrect
value.
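
A sketch of the alignment, assuming an ascending list of sink-supported DSC input BPC values (e.g. 8/10/12; helper names hypothetical): align the minimum pipe BPP up to the smallest supported value that satisfies it, and the maximum down to the largest supported value still within it; 0 means no supported value fits.

```c
#include <assert.h>

static int align_min_sink_dsc_input_bpp(const int *bpc, int n, int min_pipe_bpp)
{
	int i;

	/* Smallest supported input BPP (3 * bpc) at or above the minimum. */
	for (i = 0; i < n; i++)
		if (bpc[i] * 3 >= min_pipe_bpp)
			return bpc[i] * 3;

	return 0;
}

static int align_max_sink_dsc_input_bpp(const int *bpc, int n, int max_pipe_bpp)
{
	int i;

	/* Largest supported input BPP (3 * bpc) at or below the maximum. */
	for (i = n - 1; i >= 0; i--)
		if (bpc[i] * 3 <= max_pipe_bpp)
			return bpc[i] * 3;

	return 0;
}
```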

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 1ef64b90492ea..e7a42c9e4fef1 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1904,6 +1904,23 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
 	return intel_dp_dsc_min_src_input_bpc();
 }
 
+static int align_min_sink_dsc_input_bpp(const struct intel_connector *connector,
+					int min_pipe_bpp)
+{
+	u8 dsc_bpc[3];
+	int num_bpc;
+	int i;
+
+	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
+						       dsc_bpc);
+	for (i = num_bpc - 1; i >= 0; i--) {
+		if (dsc_bpc[i] * 3 >= min_pipe_bpp)
+			return dsc_bpc[i] * 3;
+	}
+
+	return 0;
+}
+
 static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
 					int max_pipe_bpp)
 {
@@ -2679,15 +2696,19 @@ intel_dp_dsc_compute_pipe_bpp_limits(struct intel_connector *connector,
 	int dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
 
 	limits->pipe.min_bpp = max(limits->pipe.min_bpp, dsc_min_bpc * 3);
+	limits->pipe.min_bpp = align_min_sink_dsc_input_bpp(connector, limits->pipe.min_bpp);
+
 	limits->pipe.max_bpp = min(limits->pipe.max_bpp, dsc_max_bpc * 3);
+	limits->pipe.max_bpp = align_max_sink_dsc_input_bpp(connector, limits->pipe.max_bpp);
 
 	if (limits->pipe.min_bpp <= 0 ||
 	    limits->pipe.min_bpp > limits->pipe.max_bpp) {
 		drm_dbg_kms(display->drm,
-			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d)\n",
+			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d sink-align:%d-%d)\n",
 			    connector->base.base.id, connector->base.name,
 			    dsc_min_bpc * 3, dsc_max_bpc * 3,
-			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp);
+			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp,
+			    limits->pipe.min_bpp, limits->pipe.max_bpp);
 
 		return false;
 	}
-- 
2.49.1



* [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (19 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:17   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits() Imre Deak
                   ` (33 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Align the minimum/maximum DSC compressed BPPs to the corresponding
source compressed BPP limits already when computing the BPP limits. This
alignment is also performed later during state computation; however,
there is no reason to initialize the limits to an unaligned/incorrect
value.
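
A sketch of the two alignment strategies for the compressed link BPP in .4 fixed point (helper names hypothetical): platforms with fractional BPP support round to a power-of-two step, older ones snap to the discrete VESA BPP table (the 6/8/10/12/15 values mirror i915's valid_dsc_bpp[]).

```c
#include <assert.h>

static const int valid_dsc_bpp[] = { 6, 8, 10, 12, 15 };

/* Fractional-BPP path: round to a power-of-two .4 fixed point step. */
static int round_up_step(int x, int step)
{
	return (x + step - 1) / step * step;
}

static int round_down_step(int x, int step)
{
	return x / step * step;
}

/* Legacy path: snap the maximum down to the largest VESA table entry. */
static int align_max_vesa_bpp_x16(int max_bpp_x16)
{
	int i;

	for (i = sizeof(valid_dsc_bpp) / sizeof(valid_dsc_bpp[0]) - 1; i >= 0; i--)
		if (valid_dsc_bpp[i] << 4 <= max_bpp_x16)
			return valid_dsc_bpp[i] << 4;

	return 0;
}
```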

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 57 +++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index e7a42c9e4fef1..801e8fd6b229e 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -867,6 +867,20 @@ small_joiner_ram_size_bits(struct intel_display *display)
 		return 6144 * 8;
 }
 
+static int align_min_vesa_compressed_bpp_x16(int min_link_bpp_x16)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
+		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
+
+		if (vesa_bpp_x16 >= min_link_bpp_x16)
+			return vesa_bpp_x16;
+	}
+
+	return 0;
+}
+
 static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
 {
 	int i;
@@ -2261,6 +2275,40 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
 	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
 }
 
+static int align_min_compressed_bpp_x16(const struct intel_connector *connector, int min_bpp_x16)
+{
+	struct intel_display *display = to_intel_display(connector);
+
+	if (DISPLAY_VER(display) >= 13) {
+		int bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
+
+		drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
+
+		return round_up(min_bpp_x16, bpp_step_x16);
+	} else {
+		return align_min_vesa_compressed_bpp_x16(min_bpp_x16);
+	}
+}
+
+static int align_max_compressed_bpp_x16(const struct intel_connector *connector,
+					enum intel_output_format output_format,
+					int pipe_bpp, int max_bpp_x16)
+{
+	struct intel_display *display = to_intel_display(connector);
+	int link_bpp_x16 = intel_dp_output_format_link_bpp_x16(output_format, pipe_bpp);
+	int bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
+
+	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
+
+	if (DISPLAY_VER(display) >= 13) {
+		drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
+
+		return round_down(max_bpp_x16, bpp_step_x16);
+	} else {
+		return align_max_vesa_compressed_bpp_x16(max_bpp_x16);
+	}
+}
+
 /*
  * Find the max compressed BPP we can find a link configuration for. The BPPs to
  * try depend on the source (platform) and sink.
@@ -2639,6 +2687,9 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
 		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
 
+		limits->link.min_bpp_x16 =
+			align_min_compressed_bpp_x16(connector, limits->link.min_bpp_x16);
+
 		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
 		joiner_max_bpp =
 			get_max_compressed_bpp_with_joiner(display,
@@ -2663,6 +2714,12 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
 				    connector->base.base.id, connector->base.name,
 				    FXP_Q4_ARGS(max_link_bpp_x16));
 		}
+
+		max_link_bpp_x16 =
+			align_max_compressed_bpp_x16(connector,
+						     crtc_state->output_format,
+						     limits->pipe.max_bpp,
+						     max_link_bpp_x16);
 	}
 
 	limits->link.max_bpp_x16 = max_link_bpp_x16;
-- 
2.49.1



* [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (20 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:23   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp() Imre Deak
                   ` (32 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The intel_dp pointer can be deduced from the connector pointer, so it's
enough to pass only the connector to
intel_dp_compute_config_link_bpp_limits(); do so.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 801e8fd6b229e..5ad71e697e585 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2653,13 +2653,13 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
  * range, crtc_state and dsc mode. Return true on success.
  */
 static bool
-intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
-					const struct intel_connector *connector,
+intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 					const struct intel_crtc_state *crtc_state,
 					bool dsc,
 					struct link_config_limits *limits)
 {
-	struct intel_display *display = to_intel_display(intel_dp);
+	struct intel_display *display = to_intel_display(connector);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
 	const struct drm_display_mode *adjusted_mode =
 		&crtc_state->hw.adjusted_mode;
 	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
@@ -2831,8 +2831,7 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
 
 	intel_dp_test_compute_config(intel_dp, crtc_state, limits);
 
-	return intel_dp_compute_config_link_bpp_limits(intel_dp,
-						       connector,
+	return intel_dp_compute_config_link_bpp_limits(connector,
 						       crtc_state,
 						       dsc,
 						       limits);
-- 
2.49.1



* [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (21 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:27   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16() Imre Deak
                   ` (31 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Prepare for follow-up changes that also call
intel_dp_dsc_min_sink_compressed_bpp() /
intel_dp_dsc_max_sink_compressed_bpp_x16()
without an intel_crtc_state.

While at it, remove the stale function declarations from the header file.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 24 ++++++++++++------------
 drivers/gpu/drm/i915/display/intel_dp.h |  4 ----
 2 files changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 5ad71e697e585..54a037fcf5111 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2156,7 +2156,7 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 
 static
 u16 intel_dp_dsc_max_sink_compressed_bppx16(const struct intel_connector *connector,
-					    const struct intel_crtc_state *pipe_config,
+					    enum intel_output_format output_format,
 					    int bpc)
 {
 	u16 max_bppx16 = drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd);
@@ -2167,43 +2167,43 @@ u16 intel_dp_dsc_max_sink_compressed_bppx16(const struct intel_connector *connec
 	 * If support not given in DPCD 67h, 68h use the Maximum Allowed bit rate
 	 * values as given in spec Table 2-157 DP v2.0
 	 */
-	switch (pipe_config->output_format) {
+	switch (output_format) {
 	case INTEL_OUTPUT_FORMAT_RGB:
 	case INTEL_OUTPUT_FORMAT_YCBCR444:
 		return (3 * bpc) << 4;
 	case INTEL_OUTPUT_FORMAT_YCBCR420:
 		return (3 * (bpc / 2)) << 4;
 	default:
-		MISSING_CASE(pipe_config->output_format);
+		MISSING_CASE(output_format);
 		break;
 	}
 
 	return 0;
 }
 
-int intel_dp_dsc_sink_min_compressed_bpp(const struct intel_crtc_state *pipe_config)
+static int intel_dp_dsc_sink_min_compressed_bpp(enum intel_output_format output_format)
 {
 	/* From Mandatory bit rate range Support Table 2-157 (DP v2.0) */
-	switch (pipe_config->output_format) {
+	switch (output_format) {
 	case INTEL_OUTPUT_FORMAT_RGB:
 	case INTEL_OUTPUT_FORMAT_YCBCR444:
 		return 8;
 	case INTEL_OUTPUT_FORMAT_YCBCR420:
 		return 6;
 	default:
-		MISSING_CASE(pipe_config->output_format);
+		MISSING_CASE(output_format);
 		break;
 	}
 
 	return 0;
 }
 
-int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
-					 const struct intel_crtc_state *pipe_config,
-					 int bpc)
+static int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
+						enum intel_output_format output_format,
+						int bpc)
 {
 	return intel_dp_dsc_max_sink_compressed_bppx16(connector,
-						       pipe_config, bpc) >> 4;
+						       output_format, bpc) >> 4;
 }
 
 int intel_dp_dsc_min_src_compressed_bpp(void)
@@ -2683,7 +2683,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 		int joiner_max_bpp;
 
 		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
-		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state);
+		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state->output_format);
 		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
 		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
 
@@ -2697,7 +2697,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 							   adjusted_mode->hdisplay,
 							   intel_crtc_num_joined_pipes(crtc_state));
 		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
-									crtc_state,
+									crtc_state->output_format,
 									limits->pipe.max_bpp / 3);
 		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
 		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 30eebb8cad6d2..489b8c945da39 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -150,10 +150,6 @@ u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
 					enum intel_output_format output_format,
 					u32 pipe_bpp,
 					u32 timeslots);
-int intel_dp_dsc_sink_min_compressed_bpp(const struct intel_crtc_state *pipe_config);
-int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
-					 const struct intel_crtc_state *pipe_config,
-					 int bpc);
 bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16);
 u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 				int mode_clock, int mode_hdisplay,
-- 
2.49.1



* [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (22 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:31   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16() Imre Deak
                   ` (30 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Prepare for follow-up changes that use dsc_throughput_quirk_max_bpp_x16()
without an intel_crtc_state pointer.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 54a037fcf5111..193d9c2079347 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2616,11 +2616,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 
 static int
 dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
-				 const struct intel_crtc_state *crtc_state)
+				 int mode_clock)
 {
-	const struct drm_display_mode *adjusted_mode =
-		&crtc_state->hw.adjusted_mode;
-
 	if (!connector->dp.dsc_throughput_quirk)
 		return INT_MAX;
 
@@ -2640,7 +2637,7 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
 	 * smaller than the YUV422/420 value, but let's not depend on this
 	 * assumption.
 	 */
-	if (adjusted_mode->crtc_clock <
+	if (mode_clock <
 	    min(connector->dp.dsc_branch_caps.overall_throughput.rgb_yuv444,
 		connector->dp.dsc_branch_caps.overall_throughput.yuv422_420) / 2)
 		return INT_MAX;
@@ -2704,7 +2701,8 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 
 		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
 
-		throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector, crtc_state);
+		throughput_max_bpp_x16 =
+			dsc_throughput_quirk_max_bpp_x16(connector, adjusted_mode->crtc_clock);
 		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
 			max_link_bpp_x16 = throughput_max_bpp_x16;
 
-- 
2.49.1



* [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (23 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:39   ` Govindapillai, Vinod
  2025-11-27 17:49 ` [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16() Imre Deak
                   ` (29 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out compute_min_compressed_bpp_x16(), which will also be used
during mode validation in a follow-up change.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 26 +++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 193d9c2079347..2a5f5f1b4b128 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2645,6 +2645,23 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
 	return fxp_q4_from_int(12);
 }
 
+static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
+					  enum intel_output_format output_format)
+{
+	int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
+	int min_bpp_x16;
+
+	dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
+	dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(output_format);
+	dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
+
+	min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
+
+	min_bpp_x16 = align_min_compressed_bpp_x16(connector, min_bpp_x16);
+
+	return min_bpp_x16;
+}
+
 /*
  * Calculate the output link min, max bpp values in limits based on the pipe bpp
  * range, crtc_state and dsc mode. Return true on success.
@@ -2674,18 +2691,11 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 
 		limits->link.min_bpp_x16 = fxp_q4_from_int(limits->pipe.min_bpp);
 	} else {
-		int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
 		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
 		int throughput_max_bpp_x16;
 		int joiner_max_bpp;
-
-		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
-		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state->output_format);
-		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
-		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
-
 		limits->link.min_bpp_x16 =
-			align_min_compressed_bpp_x16(connector, limits->link.min_bpp_x16);
+			compute_min_compressed_bpp_x16(connector, crtc_state->output_format);
 
 		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
 		joiner_max_bpp =
-- 
2.49.1



* [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (24 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16() Imre Deak
@ 2025-11-27 17:49 ` Imre Deak
  2025-12-12  9:50   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc() Imre Deak
                   ` (28 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:49 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out compute_max_compressed_bpp_x16(), which will also be used
during mode validation in a follow-up change.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 82 +++++++++++++++----------
 1 file changed, 49 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 2a5f5f1b4b128..9deb99eda8813 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2662,6 +2662,48 @@ static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
 	return min_bpp_x16;
 }
 
+static int compute_max_compressed_bpp_x16(struct intel_connector *connector,
+					  int mode_clock, int mode_hdisplay,
+					  int num_joined_pipes,
+					  enum intel_output_format output_format,
+					  int pipe_max_bpp, int max_link_bpp_x16)
+{
+	struct intel_display *display = to_intel_display(connector);
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
+	int throughput_max_bpp_x16;
+	int joiner_max_bpp;
+
+	dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
+	joiner_max_bpp = get_max_compressed_bpp_with_joiner(display,
+							    mode_clock,
+							    mode_hdisplay,
+							    num_joined_pipes);
+	dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
+								output_format,
+								pipe_max_bpp / 3);
+	dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
+	dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
+
+	max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
+
+	throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector,
+								  mode_clock);
+	if (throughput_max_bpp_x16 < max_link_bpp_x16) {
+		max_link_bpp_x16 = throughput_max_bpp_x16;
+
+		drm_dbg_kms(display->drm,
+			    "[CONNECTOR:%d:%s] Decreasing link max bpp to " FXP_Q4_FMT " due to DSC throughput quirk\n",
+			    connector->base.base.id, connector->base.name,
+			    FXP_Q4_ARGS(max_link_bpp_x16));
+	}
+
+	max_link_bpp_x16 = align_max_compressed_bpp_x16(connector, output_format,
+							pipe_max_bpp, max_link_bpp_x16);
+
+	return max_link_bpp_x16;
+}
+
 /*
  * Calculate the output link min, max bpp values in limits based on the pipe bpp
  * range, crtc_state and dsc mode. Return true on success.
@@ -2691,43 +2733,17 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
 
 		limits->link.min_bpp_x16 = fxp_q4_from_int(limits->pipe.min_bpp);
 	} else {
-		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
-		int throughput_max_bpp_x16;
-		int joiner_max_bpp;
 		limits->link.min_bpp_x16 =
 			compute_min_compressed_bpp_x16(connector, crtc_state->output_format);
 
-		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
-		joiner_max_bpp =
-			get_max_compressed_bpp_with_joiner(display,
-							   adjusted_mode->crtc_clock,
-							   adjusted_mode->hdisplay,
-							   intel_crtc_num_joined_pipes(crtc_state));
-		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
-									crtc_state->output_format,
-									limits->pipe.max_bpp / 3);
-		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
-		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
-
-		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
-
-		throughput_max_bpp_x16 =
-			dsc_throughput_quirk_max_bpp_x16(connector, adjusted_mode->crtc_clock);
-		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
-			max_link_bpp_x16 = throughput_max_bpp_x16;
-
-			drm_dbg_kms(display->drm,
-				    "[CRTC:%d:%s][CONNECTOR:%d:%s] Decreasing link max bpp to " FXP_Q4_FMT " due to DSC throughput quirk\n",
-				    crtc->base.base.id, crtc->base.name,
-				    connector->base.base.id, connector->base.name,
-				    FXP_Q4_ARGS(max_link_bpp_x16));
-		}
-
 		max_link_bpp_x16 =
-			align_max_compressed_bpp_x16(connector,
-						     crtc_state->output_format,
-						     limits->pipe.max_bpp,
-						     max_link_bpp_x16);
+			compute_max_compressed_bpp_x16(connector,
+						       adjusted_mode->crtc_clock,
+						       adjusted_mode->hdisplay,
+						       intel_crtc_num_joined_pipes(crtc_state),
+						       crtc_state->output_format,
+						       limits->pipe.max_bpp,
+						       max_link_bpp_x16);
 	}
 
 	limits->link.max_bpp_x16 = max_link_bpp_x16;
-- 
2.49.1



* [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (25 preceding siblings ...)
  2025-11-27 17:49 ` [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 11:43   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation Imre Deak
                   ` (27 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add intel_dp_mode_valid_with_dsc() and call it during SST/MST mode
validation. This prepares for a follow-up change verifying the mode's
required BW the same way this is done during state computation, that is
based on the mode's effective data rate including the corresponding BW
overhead.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c     | 57 +++++++++++++++------
 drivers/gpu/drm/i915/display/intel_dp.h     |  7 +++
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 29 ++++-------
 3 files changed, 57 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 9deb99eda8813..b40edf4febcb7 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1579,24 +1579,20 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 			dsc_slice_count =
 				drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
 								true);
+			dsc = dsc_max_compressed_bpp && dsc_slice_count;
 		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
-			dsc_max_compressed_bpp =
-				intel_dp_dsc_get_max_compressed_bpp(display,
-								    max_link_clock,
-								    max_lanes,
-								    target_clock,
-								    mode->hdisplay,
-								    num_joined_pipes,
-								    output_format,
-								    pipe_bpp, 64);
-			dsc_slice_count =
-				intel_dp_dsc_get_slice_count(connector,
-							     target_clock,
-							     mode->hdisplay,
-							     num_joined_pipes);
+			unsigned long bw_overhead_flags = 0;
+
+			if (!drm_dp_is_uhbr_rate(max_link_clock))
+				bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
+
+			dsc = intel_dp_mode_valid_with_dsc(connector,
+							   max_link_clock, max_lanes,
+							   target_clock, mode->hdisplay,
+							   num_joined_pipes,
+							   output_format, pipe_bpp,
+							   bw_overhead_flags);
 		}
-
-		dsc = dsc_max_compressed_bpp && dsc_slice_count;
 	}
 
 	if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
@@ -2704,6 +2700,35 @@ static int compute_max_compressed_bpp_x16(struct intel_connector *connector,
 	return max_link_bpp_x16;
 }
 
+bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
+				  int link_clock, int lane_count,
+				  int mode_clock, int mode_hdisplay,
+				  int num_joined_pipes,
+				  enum intel_output_format output_format,
+				  int pipe_bpp, unsigned long bw_overhead_flags)
+{
+	struct intel_display *display = to_intel_display(connector);
+	int dsc_max_compressed_bpp;
+	int dsc_slice_count;
+
+	dsc_max_compressed_bpp =
+		intel_dp_dsc_get_max_compressed_bpp(display,
+						    link_clock,
+						    lane_count,
+						    mode_clock,
+						    mode_hdisplay,
+						    num_joined_pipes,
+						    output_format,
+						    pipe_bpp, 64);
+	dsc_slice_count =
+		intel_dp_dsc_get_slice_count(connector,
+					     mode_clock,
+					     mode_hdisplay,
+					     num_joined_pipes);
+
+	return dsc_max_compressed_bpp && dsc_slice_count;
+}
+
 /*
  * Calculate the output link min, max bpp values in limits based on the pipe bpp
  * range, crtc_state and dsc mode. Return true on success.
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 489b8c945da39..0ec7baec7a8e8 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -150,6 +150,13 @@ u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
 					enum intel_output_format output_format,
 					u32 pipe_bpp,
 					u32 timeslots);
+
+bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
+				  int link_clock, int lane_count,
+				  int mode_clock, int mode_hdisplay,
+				  int num_joined_pipes,
+				  enum intel_output_format output_format,
+				  int pipe_bpp, unsigned long bw_overhead_flags);
 bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16);
 u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 				int mode_clock, int mode_hdisplay,
diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index 0db6ed2d9664c..e3f8679e95252 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -1462,8 +1462,6 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
 		DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK;
 	int ret;
 	bool dsc = false;
-	u16 dsc_max_compressed_bpp = 0;
-	u8 dsc_slice_count = 0;
 	int target_clock = mode->clock;
 	int num_joined_pipes;
 
@@ -1522,31 +1520,22 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
 		return 0;
 	}
 
-	if (intel_dp_has_dsc(connector)) {
+	if (intel_dp_has_dsc(connector) && drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
 		/*
 		 * TBD pass the connector BPC,
 		 * for now U8_MAX so that max BPC on that platform would be picked
 		 */
 		int pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
 
-		if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
-			dsc_max_compressed_bpp =
-				intel_dp_dsc_get_max_compressed_bpp(display,
-								    max_link_clock,
-								    max_lanes,
-								    target_clock,
-								    mode->hdisplay,
-								    num_joined_pipes,
-								    INTEL_OUTPUT_FORMAT_RGB,
-								    pipe_bpp, 64);
-			dsc_slice_count =
-				intel_dp_dsc_get_slice_count(connector,
-							     target_clock,
-							     mode->hdisplay,
-							     num_joined_pipes);
-		}
+		if (!drm_dp_is_uhbr_rate(max_link_clock))
+			bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
 
-		dsc = dsc_max_compressed_bpp && dsc_slice_count;
+		dsc = intel_dp_mode_valid_with_dsc(connector,
+						   max_link_clock, max_lanes,
+						   target_clock, mode->hdisplay,
+						   num_joined_pipes,
+						   INTEL_OUTPUT_FORMAT_RGB, pipe_bpp,
+						   bw_overhead_flags);
 	}
 
 	if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc) {
-- 
2.49.1



* [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (26 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 14:29   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs Imre Deak
                   ` (26 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Atm, whether a DP DSC video mode's required BW fits into the available
BW is determined by calculating the maximum compressed BPP value the
available BW allows for. Doing that using a closed-form formula as it's
done atm (vs. iteratively) is problematic, since the BW overhead in the
required BW itself depends on the BPP value being calculated. Instead,
calculate the required BW for the minimum compressed BPP value supported
by both the source and the sink and check this BW against the available
BW. This change also aligns the BW calculation during mode validation
with how it's done during state computation, calculating the required
effective data rate with the corresponding BW overhead.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 137 ++++--------------------
 drivers/gpu/drm/i915/display/intel_dp.h |   8 --
 2 files changed, 18 insertions(+), 127 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index b40edf4febcb7..8b601994bb138 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -895,49 +895,6 @@ static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
 	return 0;
 }
 
-static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
-{
-	u32 bits_per_pixel = bpp;
-
-	/* Error out if the max bpp is less than smallest allowed valid bpp */
-	if (bits_per_pixel < valid_dsc_bpp[0]) {
-		drm_dbg_kms(display->drm, "Unsupported BPP %u, min %u\n",
-			    bits_per_pixel, valid_dsc_bpp[0]);
-		return 0;
-	}
-
-	/* From XE_LPD onwards we support from bpc upto uncompressed bpp-1 BPPs */
-	if (DISPLAY_VER(display) >= 13) {
-		bits_per_pixel = min(bits_per_pixel, pipe_bpp - 1);
-
-		/*
-		 * According to BSpec, 27 is the max DSC output bpp,
-		 * 8 is the min DSC output bpp.
-		 * While we can still clamp higher bpp values to 27, saving bandwidth,
-		 * if it is required to oompress up to bpp < 8, means we can't do
-		 * that and probably means we can't fit the required mode, even with
-		 * DSC enabled.
-		 */
-		if (bits_per_pixel < 8) {
-			drm_dbg_kms(display->drm,
-				    "Unsupported BPP %u, min 8\n",
-				    bits_per_pixel);
-			return 0;
-		}
-		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
-	} else {
-		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
-
-		/* Find the nearest match in the array of known BPPs from VESA */
-		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
-
-		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
-		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
-	}
-
-	return bits_per_pixel;
-}
-
 static int bigjoiner_interface_bits(struct intel_display *display)
 {
 	return DISPLAY_VER(display) >= 14 ? 36 : 24;
@@ -1001,64 +958,6 @@ u32 get_max_compressed_bpp_with_joiner(struct intel_display *display,
 	return max_bpp;
 }
 
-/* TODO: return a bpp_x16 value */
-u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
-					u32 link_clock, u32 lane_count,
-					u32 mode_clock, u32 mode_hdisplay,
-					int num_joined_pipes,
-					enum intel_output_format output_format,
-					u32 pipe_bpp,
-					u32 timeslots)
-{
-	u32 bits_per_pixel, joiner_max_bpp;
-
-	/*
-	 * Available Link Bandwidth(Kbits/sec) = (NumberOfLanes)*
-	 * (LinkSymbolClock)* 8 * (TimeSlots / 64)
-	 * for SST -> TimeSlots is 64(i.e all TimeSlots that are available)
-	 * for MST -> TimeSlots has to be calculated, based on mode requirements
-	 *
-	 * Due to FEC overhead, the available bw is reduced to 97.2261%.
-	 * To support the given mode:
-	 * Bandwidth required should be <= Available link Bandwidth * FEC Overhead
-	 * =>ModeClock * bits_per_pixel <= Available Link Bandwidth * FEC Overhead
-	 * =>bits_per_pixel <= Available link Bandwidth * FEC Overhead / ModeClock
-	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock) * 8 (TimeSlots / 64) /
-	 *		       (ModeClock / FEC Overhead)
-	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock * TimeSlots) /
-	 *		       (ModeClock / FEC Overhead * 8)
-	 */
-	bits_per_pixel = ((link_clock * lane_count) * timeslots) /
-			 (intel_dp_mode_to_fec_clock(mode_clock) * 8);
-
-	/* Bandwidth required for 420 is half, that of 444 format */
-	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
-		bits_per_pixel *= 2;
-
-	/*
-	 * According to DSC 1.2a Section 4.1.1 Table 4.1 the maximum
-	 * supported PPS value can be 63.9375 and with the further
-	 * mention that for 420, 422 formats, bpp should be programmed double
-	 * the target bpp restricting our target bpp to be 31.9375 at max.
-	 */
-	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
-		bits_per_pixel = min_t(u32, bits_per_pixel, 31);
-
-	drm_dbg_kms(display->drm, "Max link bpp is %u for %u timeslots "
-				"total bw %u pixel clock %u\n",
-				bits_per_pixel, timeslots,
-				(link_clock * lane_count * 8),
-				intel_dp_mode_to_fec_clock(mode_clock));
-
-	joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, mode_clock,
-							    mode_hdisplay, num_joined_pipes);
-	bits_per_pixel = min(bits_per_pixel, joiner_max_bpp);
-
-	bits_per_pixel = intel_dp_dsc_nearest_valid_bpp(display, bits_per_pixel, pipe_bpp);
-
-	return bits_per_pixel;
-}
-
 u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 				int mode_clock, int mode_hdisplay,
 				int num_joined_pipes)
@@ -2707,26 +2606,26 @@ bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
 				  enum intel_output_format output_format,
 				  int pipe_bpp, unsigned long bw_overhead_flags)
 {
-	struct intel_display *display = to_intel_display(connector);
-	int dsc_max_compressed_bpp;
-	int dsc_slice_count;
+	struct intel_dp *intel_dp = intel_attached_dp(connector);
+	int min_bpp_x16 = compute_min_compressed_bpp_x16(connector, output_format);
+	int max_bpp_x16 = compute_max_compressed_bpp_x16(connector,
+							 mode_clock, mode_hdisplay,
+							 num_joined_pipes,
+							 output_format,
+							 pipe_bpp, INT_MAX);
+	int dsc_slice_count = intel_dp_dsc_get_slice_count(connector,
+							   mode_clock,
+							   mode_hdisplay,
+							   num_joined_pipes);
 
-	dsc_max_compressed_bpp =
-		intel_dp_dsc_get_max_compressed_bpp(display,
-						    link_clock,
-						    lane_count,
-						    mode_clock,
-						    mode_hdisplay,
-						    num_joined_pipes,
-						    output_format,
-						    pipe_bpp, 64);
-	dsc_slice_count =
-		intel_dp_dsc_get_slice_count(connector,
-					     mode_clock,
-					     mode_hdisplay,
-					     num_joined_pipes);
+	if (min_bpp_x16 <= 0 || min_bpp_x16 > max_bpp_x16)
+		return false;
 
-	return dsc_max_compressed_bpp && dsc_slice_count;
+	return is_bw_sufficient_for_dsc_config(intel_dp,
+					       link_clock, lane_count,
+					       mode_clock, mode_hdisplay,
+					       dsc_slice_count, min_bpp_x16,
+					       bw_overhead_flags);
 }
 
 /*
diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
index 0ec7baec7a8e8..25bfbfd291b0a 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.h
+++ b/drivers/gpu/drm/i915/display/intel_dp.h
@@ -143,14 +143,6 @@ bool intel_digital_port_connected(struct intel_encoder *encoder);
 bool intel_digital_port_connected_locked(struct intel_encoder *encoder);
 int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
 				 u8 dsc_max_bpc);
-u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
-					u32 link_clock, u32 lane_count,
-					u32 mode_clock, u32 mode_hdisplay,
-					int num_joined_pipes,
-					enum intel_output_format output_format,
-					u32 pipe_bpp,
-					u32 timeslots);
-
 bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
 				  int link_clock, int lane_count,
 				  int mode_clock, int mode_hdisplay,
-- 
2.49.1



* [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (27 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 14:34   ` Govindapillai, Vinod
  2025-12-12 14:39   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP Imre Deak
                   ` (25 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The minimum/maximum compressed BPP values are aligned/bounded to the
corresponding source limits in intel_dp_compute_config_link_bpp_limits().
The minimum compressed BPP value doesn't change afterwards, so there is
no need to align it again; remove that.

The maximum compressed BPP, which depends on the pipe BPP value, still
needs to be aligned, since the pipe BPP value could change after the
above limits were computed, via intel_dp_force_dsc_pipe_bpp(). Use the
corresponding helper for this alignment instead of open-coding the same.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++------------------
 1 file changed, 5 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 8b601994bb138..e351774f508db 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2217,20 +2217,15 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 	struct intel_display *display = to_intel_display(intel_dp);
 	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
 	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
-	int link_bpp_x16;
 	int bpp_x16;
 	int ret;
 
+	min_bpp_x16 = limits->link.min_bpp_x16;
 	max_bpp_x16 = limits->link.max_bpp_x16;
 	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
 
-	/* Compressed BPP should be less than the Input DSC bpp */
-	link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format, pipe_bpp);
-	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
-
-	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
-	min_bpp_x16 = round_up(limits->link.min_bpp_x16, bpp_step_x16);
-	max_bpp_x16 = round_down(max_bpp_x16, bpp_step_x16);
+	max_bpp_x16 = align_max_compressed_bpp_x16(connector, pipe_config->output_format,
+						   pipe_bpp, max_bpp_x16);
 
 	for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
 		if (!intel_dp_dsc_valid_compressed_bpp(intel_dp, bpp_x16))
@@ -2346,8 +2341,6 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
 	int pipe_bpp, forced_bpp;
-	int dsc_min_bpp;
-	int dsc_max_bpp;
 
 	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
 
@@ -2367,15 +2360,9 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 	pipe_config->port_clock = limits->max_rate;
 	pipe_config->lane_count = limits->max_lane_count;
 
-	dsc_min_bpp = fxp_q4_to_int_roundup(limits->link.min_bpp_x16);
-
-	dsc_max_bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
-
-	/* Compressed BPP should be less than the Input DSC bpp */
-	dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);
-
 	pipe_config->dsc.compressed_bpp_x16 =
-		fxp_q4_from_int(max(dsc_min_bpp, dsc_max_bpp));
+		align_max_compressed_bpp_x16(connector, pipe_config->output_format,
+					     pipe_bpp, limits->link.max_bpp_x16);
 
 	pipe_config->pipe_bpp = pipe_bpp;
 
-- 
2.49.1



* [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (28 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 14:45   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST Imre Deak
                   ` (24 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The maximum pipe BPP value (used as the DSC input BPP) has already been
aligned to the corresponding source/sink input BPP capabilities in
intel_dp_compute_config_limits(). So there is no need to perform the
same alignment again in intel_edp_dsc_compute_pipe_bpp() called later;
this function can simply use the already aligned maximum pipe BPP value,
do that.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 16 +++-------------
 1 file changed, 3 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index e351774f508db..ee33759a2f5d7 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2337,26 +2337,16 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 					  struct drm_connector_state *conn_state,
 					  const struct link_config_limits *limits)
 {
-	struct intel_display *display = to_intel_display(intel_dp);
 	struct intel_connector *connector =
 		to_intel_connector(conn_state->connector);
 	int pipe_bpp, forced_bpp;
 
 	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
-
-	if (forced_bpp) {
+	if (forced_bpp)
 		pipe_bpp = forced_bpp;
-	} else {
-		int max_bpc = limits->pipe.max_bpp / 3;
+	else
+		pipe_bpp = limits->pipe.max_bpp;
 
-		/* For eDP use max bpp that can be supported with DSC. */
-		pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, max_bpc);
-		if (!is_dsc_pipe_bpp_sufficient(limits, pipe_bpp)) {
-			drm_dbg_kms(display->drm,
-				    "Computed BPC is not in DSC BPC limits\n");
-			return -EINVAL;
-		}
-	}
 	pipe_config->port_clock = limits->max_rate;
 	pipe_config->lane_count = limits->max_lane_count;
 
-- 
2.49.1



* [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (29 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 14:59   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP " Imre Deak
                   ` (23 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The maximum pipe BPP value (used as the DSC input BPP) has already been
aligned to the corresponding source/sink input BPP capabilities in
intel_dp_compute_config_limits(). So there is no need to perform the
same alignment again in intel_dp_dsc_compute_pipe_bpp() called later;
this function can simply use the already aligned maximum pipe BPP value,
do that.

Also, there is no point in trying pipe BPP values lower than the
maximum: this would only make dsc_compute_compressed_bpp() start from a
lower _compressed_ BPP value, but that lower compressed BPP value was
already tried when dsc_compute_compressed_bpp() was called with the
higher pipe BPP value (i.e. the first dsc_compute_compressed_bpp() call
already tries all the possible compressed BPP values, which are all
below the pipe BPP value passed to it). Simplify the function
accordingly, trying only the maximum pipe BPP value.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 29 +++++++------------------
 1 file changed, 8 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index ee33759a2f5d7..902f3a054a971 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2294,11 +2294,8 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 					 struct drm_connector_state *conn_state,
 					 const struct link_config_limits *limits)
 {
-	const struct intel_connector *connector =
-		to_intel_connector(conn_state->connector);
-	u8 dsc_bpc[3] = {};
 	int forced_bpp, pipe_bpp;
-	int num_bpc, i, ret;
+	int ret;
 
 	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
 
@@ -2311,25 +2308,15 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 		}
 	}
 
-	/*
-	 * Get the maximum DSC bpc that will be supported by any valid
-	 * link configuration and compressed bpp.
-	 */
-	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd, dsc_bpc);
-	for (i = 0; i < num_bpc; i++) {
-		pipe_bpp = dsc_bpc[i] * 3;
-		if (pipe_bpp < limits->pipe.min_bpp || pipe_bpp > limits->pipe.max_bpp)
-			continue;
+	pipe_bpp = limits->pipe.max_bpp;
+	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
+					 limits, pipe_bpp);
+	if (ret)
+		return -EINVAL;
 
-		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
-						 limits, pipe_bpp);
-		if (ret == 0) {
-			pipe_config->pipe_bpp = pipe_bpp;
-			return 0;
-		}
-	}
+	pipe_config->pipe_bpp = pipe_bpp;
 
-	return -EINVAL;
+	return 0;
 }
 
 static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
-- 
2.49.1



* [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP for DP-SST
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (30 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 15:21   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP Imre Deak
                   ` (22 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

If dsc_compute_compressed_bpp() failed with a forced pipe BPP value
(where the forced pipe BPP value itself is valid within the min/max pipe
BPP limits), it will also fail when called with the maximum pipe BPP
value: dsc_compute_compressed_bpp() tries all compressed BPPs below the
passed-in pipe BPP value, and if it failed with a given (low) compressed
BPP value, it will also fail with any compressed BPP value higher than
the one which failed already.

Based on the above, remove the logic retrying the compressed BPP
computation with the maximum pipe BPP value after it failed already with
the (lower) forced pipe BPP value.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 902f3a054a971..a921092e760b5 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2298,17 +2298,11 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 	int ret;
 
 	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
+	if (forced_bpp)
+		pipe_bpp = forced_bpp;
+	else
+		pipe_bpp = limits->pipe.max_bpp;
 
-	if (forced_bpp) {
-		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
-						 limits, forced_bpp);
-		if (ret == 0) {
-			pipe_config->pipe_bpp = forced_bpp;
-			return 0;
-		}
-	}
-
-	pipe_bpp = limits->pipe.max_bpp;
 	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
 					 limits, pipe_bpp);
 	if (ret)
-- 
2.49.1



* [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (31 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP " Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 15:38   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation Imre Deak
                   ` (21 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Move computing the eDP compressed BPP value to the function computing
this for DP, allowing further simplifications later.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index a921092e760b5..81240529337bc 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2226,6 +2226,14 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
 
 	max_bpp_x16 = align_max_compressed_bpp_x16(connector, pipe_config->output_format,
 						   pipe_bpp, max_bpp_x16);
+	if (intel_dp_is_edp(intel_dp)) {
+		pipe_config->port_clock = limits->max_rate;
+		pipe_config->lane_count = limits->max_lane_count;
+
+		pipe_config->dsc.compressed_bpp_x16 = max_bpp_x16;
+
+		return 0;
+	}
 
 	for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
 		if (!intel_dp_dsc_valid_compressed_bpp(intel_dp, bpp_x16))
@@ -2318,9 +2326,8 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 					  struct drm_connector_state *conn_state,
 					  const struct link_config_limits *limits)
 {
-	struct intel_connector *connector =
-		to_intel_connector(conn_state->connector);
 	int pipe_bpp, forced_bpp;
+	int ret;
 
 	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
 	if (forced_bpp)
@@ -2328,12 +2335,10 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 	else
 		pipe_bpp = limits->pipe.max_bpp;
 
-	pipe_config->port_clock = limits->max_rate;
-	pipe_config->lane_count = limits->max_lane_count;
-
-	pipe_config->dsc.compressed_bpp_x16 =
-		align_max_compressed_bpp_x16(connector, pipe_config->output_format,
-					     pipe_bpp, limits->link.max_bpp_x16);
+	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
+					 limits, pipe_bpp);
+	if (ret)
+		return -EINVAL;
 
 	pipe_config->pipe_bpp = pipe_bpp;
 
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (32 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-12 15:39   ` Govindapillai, Vinod
  2025-11-27 17:50 ` [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST Imre Deak
                   ` (20 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

intel_edp_dsc_compute_pipe_bpp() now matches
intel_dp_dsc_compute_pipe_bpp(), so remove the former function.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 32 ++-----------------------
 1 file changed, 2 insertions(+), 30 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 81240529337bc..de59b93388f41 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2321,30 +2321,6 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
 	return 0;
 }
 
-static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
-					  struct intel_crtc_state *pipe_config,
-					  struct drm_connector_state *conn_state,
-					  const struct link_config_limits *limits)
-{
-	int pipe_bpp, forced_bpp;
-	int ret;
-
-	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
-	if (forced_bpp)
-		pipe_bpp = forced_bpp;
-	else
-		pipe_bpp = limits->pipe.max_bpp;
-
-	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
-					 limits, pipe_bpp);
-	if (ret)
-		return -EINVAL;
-
-	pipe_config->pipe_bpp = pipe_bpp;
-
-	return 0;
-}
-
 /*
  * Return whether FEC must be enabled for 8b10b SST or MST links. On 128b132b
  * links FEC is always enabled implicitly by the HW, so this function returns
@@ -2396,12 +2372,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 	 * figured out for DP MST DSC.
 	 */
 	if (!is_mst) {
-		if (intel_dp_is_edp(intel_dp))
-			ret = intel_edp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
-							     conn_state, limits);
-		else
-			ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
-							    conn_state, limits);
+		ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
+						    conn_state, limits);
 		if (ret) {
 			drm_dbg_kms(display->drm,
 				    "No Valid pipe bpp for given mode ret = %d\n", ret);
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (33 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-08 13:08   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration Imre Deak
                   ` (19 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

The minimum/maximum DSC input (i.e. pipe) and compressed (i.e. link) BPP
limits are computed already in intel_dp_compute_config_limits(), so
there is no need to do this again in
mst_stream_dsc_compute_link_config() called later. Remove the
corresponding alignments from the latter function and use the
precomputed (aligned and within-bounds) maximum pipe BPP and min/max
compressed BPP values as-is instead.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp_mst.c | 48 +++------------------
 1 file changed, 6 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
index e3f8679e95252..24f8e60df9ac1 100644
--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
@@ -463,57 +463,21 @@ static int mst_stream_dsc_compute_link_config(struct intel_dp *intel_dp,
 {
 	struct intel_display *display = to_intel_display(intel_dp);
 	struct intel_connector *connector = to_intel_connector(conn_state->connector);
-	int num_bpc;
-	u8 dsc_bpc[3] = {};
-	int min_bpp, max_bpp, sink_min_bpp, sink_max_bpp;
-	int min_compressed_bpp_x16, max_compressed_bpp_x16;
-	int bpp_step_x16;
 
-	max_bpp = limits->pipe.max_bpp;
-	min_bpp = limits->pipe.min_bpp;
-
-	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
-						       dsc_bpc);
-
-	drm_dbg_kms(display->drm, "DSC Source supported min bpp %d max bpp %d\n",
-		    min_bpp, max_bpp);
-
-	sink_min_bpp = min_array(dsc_bpc, num_bpc) * 3;
-	sink_max_bpp = max_array(dsc_bpc, num_bpc) * 3;
-
-	drm_dbg_kms(display->drm, "DSC Sink supported min bpp %d max bpp %d\n",
-		    sink_min_bpp, sink_max_bpp);
-
-	if (min_bpp < sink_min_bpp)
-		min_bpp = sink_min_bpp;
-
-	if (max_bpp > sink_max_bpp)
-		max_bpp = sink_max_bpp;
-
-	crtc_state->pipe_bpp = max_bpp;
-
-	min_compressed_bpp_x16 = limits->link.min_bpp_x16;
-	max_compressed_bpp_x16 = limits->link.max_bpp_x16;
+	crtc_state->pipe_bpp = limits->pipe.max_bpp;
 
 	drm_dbg_kms(display->drm,
 		    "DSC Sink supported compressed min bpp " FXP_Q4_FMT " compressed max bpp " FXP_Q4_FMT "\n",
-		    FXP_Q4_ARGS(min_compressed_bpp_x16), FXP_Q4_ARGS(max_compressed_bpp_x16));
-
-	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
-
-	max_compressed_bpp_x16 = min(max_compressed_bpp_x16, fxp_q4_from_int(crtc_state->pipe_bpp) - bpp_step_x16);
-
-	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
-	min_compressed_bpp_x16 = round_up(min_compressed_bpp_x16, bpp_step_x16);
-	max_compressed_bpp_x16 = round_down(max_compressed_bpp_x16, bpp_step_x16);
+		    FXP_Q4_ARGS(limits->link.min_bpp_x16), FXP_Q4_ARGS(limits->link.max_bpp_x16));
 
 	crtc_state->lane_count = limits->max_lane_count;
 	crtc_state->port_clock = limits->max_rate;
 
 	return intel_dp_mtp_tu_compute_config(intel_dp, crtc_state, conn_state,
-					      min_compressed_bpp_x16,
-					      max_compressed_bpp_x16,
-					      bpp_step_x16, true);
+					      limits->link.min_bpp_x16,
+					      limits->link.max_bpp_x16,
+					      intel_dp_dsc_bpp_step_x16(connector),
+					      true);
 }
 
 static int mode_hblank_period_ns(const struct drm_display_mode *mode)
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (34 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09  8:24   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state Imre Deak
                   ` (18 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add a way to track the detailed DSC pipes-per-line, streams-per-pipe,
slices-per-stream configuration instead of the current streams-per-pipe
and slices-per-line values. This describes the slice configuration more
clearly, for instance providing a

2 pipes-per-line x 2 streams-per-pipe x 2 slices-per-stream = 8 slices-per-line

view, instead of the current, coarser

2 streams-per-pipe, 8 slices-per-line

view, the former better reflecting that each DSC stream engine has 2
slices. This also lets us optimize the configuration in a
simpler/clearer way, for instance using 1 stream x 2 slices or 1 stream
x 4 slices instead of the current 2 streams x 1 slice or 2 streams x 2
slices configuration (so that 1 DSC stream engine can be powered off in
each pipe).

Follow-up changes will convert the current slices-per-line computation
logic to compute instead the above detailed slice configuration.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_display_types.h | 5 +++++
 drivers/gpu/drm/i915/display/intel_vdsc.c          | 5 +++++
 drivers/gpu/drm/i915/display/intel_vdsc.h          | 2 ++
 3 files changed, 12 insertions(+)

diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 38702a9e0f508..a3de93cdcbde0 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -1306,6 +1306,11 @@ struct intel_crtc_state {
 		bool compression_enabled_on_link;
 		bool compression_enable;
 		int num_streams;
+		struct intel_dsc_slice_config {
+			int pipes_per_line;
+			int streams_per_pipe;
+			int slices_per_stream;
+		} slice_config;
 		/* Compressed Bpp in U6.4 format (first 4 bits for fractional part) */
 		u16 compressed_bpp_x16;
 		u8 slice_count;
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
index 0e727fc5e80c1..8aa480e3d1c9d 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.c
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
@@ -35,6 +35,11 @@ bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state)
 	return true;
 }
 
+int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config)
+{
+	return config->pipes_per_line * config->streams_per_pipe * config->slices_per_stream;
+}
+
 static bool is_pipe_dsc(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
 {
 	struct intel_display *display = to_intel_display(crtc);
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.h b/drivers/gpu/drm/i915/display/intel_vdsc.h
index 99f64ac54b273..e61116d5297c8 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.h
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.h
@@ -13,9 +13,11 @@ struct drm_printer;
 enum transcoder;
 struct intel_crtc;
 struct intel_crtc_state;
+struct intel_dsc_slice_config;
 struct intel_encoder;
 
 bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state);
+int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config);
 void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state);
 void intel_dsc_enable(const struct intel_crtc_state *crtc_state);
 void intel_dsc_disable(const struct intel_crtc_state *crtc_state);
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (35 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09  8:28   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc() Imre Deak
                   ` (17 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Move the tracking for the DSC stream count from
intel_crtc_state::dsc.num_streams to
intel_crtc_state::dsc.slice_config.streams_per_pipe.

While at it add a TODO comment to read out the full DSC configuration
from HW including the pipes-per-line and slices-per-stream values.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/icl_dsi.c             |  4 ++--
 drivers/gpu/drm/i915/display/intel_display.c       |  2 +-
 drivers/gpu/drm/i915/display/intel_display_types.h |  1 -
 drivers/gpu/drm/i915/display/intel_dp.c            |  6 +++---
 drivers/gpu/drm/i915/display/intel_vdsc.c          | 11 ++++++-----
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
index 9230792960f29..90076839e7152 100644
--- a/drivers/gpu/drm/i915/display/icl_dsi.c
+++ b/drivers/gpu/drm/i915/display/icl_dsi.c
@@ -1626,9 +1626,9 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
 
 	/* FIXME: split only when necessary */
 	if (crtc_state->dsc.slice_count > 1)
-		crtc_state->dsc.num_streams = 2;
+		crtc_state->dsc.slice_config.streams_per_pipe = 2;
 	else
-		crtc_state->dsc.num_streams = 1;
+		crtc_state->dsc.slice_config.streams_per_pipe = 1;
 
 	/* FIXME: initialize from VBT */
 	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 04f5c488f3998..aef6cfa7bde8e 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -5450,7 +5450,7 @@ intel_pipe_config_compare(const struct intel_crtc_state *current_config,
 	PIPE_CONF_CHECK_I(dsc.config.nsl_bpg_offset);
 
 	PIPE_CONF_CHECK_BOOL(dsc.compression_enable);
-	PIPE_CONF_CHECK_I(dsc.num_streams);
+	PIPE_CONF_CHECK_I(dsc.slice_config.streams_per_pipe);
 	PIPE_CONF_CHECK_I(dsc.compressed_bpp_x16);
 
 	PIPE_CONF_CHECK_BOOL(splitter.enable);
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index a3de93cdcbde0..574fc7ff33c97 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -1305,7 +1305,6 @@ struct intel_crtc_state {
 		/* Only used for state computation, not read out from the HW. */
 		bool compression_enabled_on_link;
 		bool compression_enable;
-		int num_streams;
 		struct intel_dsc_slice_config {
 			int pipes_per_line;
 			int streams_per_pipe;
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index de59b93388f41..03266511841e2 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2417,11 +2417,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 	 */
 	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
 	    pipe_config->dsc.slice_count == 12)
-		pipe_config->dsc.num_streams = 3;
+		pipe_config->dsc.slice_config.streams_per_pipe = 3;
 	else if (pipe_config->joiner_pipes || pipe_config->dsc.slice_count > 1)
-		pipe_config->dsc.num_streams = 2;
+		pipe_config->dsc.slice_config.streams_per_pipe = 2;
 	else
-		pipe_config->dsc.num_streams = 1;
+		pipe_config->dsc.slice_config.streams_per_pipe = 1;
 
 	ret = intel_dp_dsc_compute_params(connector, pipe_config);
 	if (ret < 0) {
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
index 8aa480e3d1c9d..2b27671f97b32 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.c
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
@@ -421,7 +421,7 @@ intel_dsc_power_domain(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
 
 static int intel_dsc_get_vdsc_per_pipe(const struct intel_crtc_state *crtc_state)
 {
-	return crtc_state->dsc.num_streams;
+	return crtc_state->dsc.slice_config.streams_per_pipe;
 }
 
 int intel_dsc_get_num_vdsc_instances(const struct intel_crtc_state *crtc_state)
@@ -1023,12 +1023,13 @@ void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
 	if (!crtc_state->dsc.compression_enable)
 		goto out;
 
+	/* TODO: Read out slice_config.pipes_per_line/slices_per_stream as well */
 	if (dss_ctl1 & JOINER_ENABLE && dss_ctl2 & (VDSC2_ENABLE | SMALL_JOINER_CONFIG_3_ENGINES))
-		crtc_state->dsc.num_streams = 3;
+		crtc_state->dsc.slice_config.streams_per_pipe = 3;
 	else if (dss_ctl1 & JOINER_ENABLE && dss_ctl2 & VDSC1_ENABLE)
-		crtc_state->dsc.num_streams = 2;
+		crtc_state->dsc.slice_config.streams_per_pipe = 2;
 	else
-		crtc_state->dsc.num_streams = 1;
+		crtc_state->dsc.slice_config.streams_per_pipe = 1;
 
 	intel_dsc_get_pps_config(crtc_state);
 out:
@@ -1042,7 +1043,7 @@ static void intel_vdsc_dump_state(struct drm_printer *p, int indent,
 			  "dsc-dss: compressed-bpp:" FXP_Q4_FMT ", slice-count: %d, num_streams: %d\n",
 			  FXP_Q4_ARGS(crtc_state->dsc.compressed_bpp_x16),
 			  crtc_state->dsc.slice_count,
-			  crtc_state->dsc.num_streams);
+			  crtc_state->dsc.slice_config.streams_per_pipe);
 }
 
 void intel_vdsc_state_dump(struct drm_printer *p, int indent,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (36 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09  8:47   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration Imre Deak
                   ` (16 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Move the initialization of the DSI DSC streams-per-pipe value to
fill_dsc() next to where the corresponding (per-line) slice_count value
is initialized. This allows converting the initialization to use the
detailed slice configuration state in follow-up changes.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/icl_dsi.c    | 6 ------
 drivers/gpu/drm/i915/display/intel_bios.c | 5 +++++
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
index 90076839e7152..9aba3d813daae 100644
--- a/drivers/gpu/drm/i915/display/icl_dsi.c
+++ b/drivers/gpu/drm/i915/display/icl_dsi.c
@@ -1624,12 +1624,6 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
 	if (crtc_state->pipe_bpp < 8 * 3)
 		return -EINVAL;
 
-	/* FIXME: split only when necessary */
-	if (crtc_state->dsc.slice_count > 1)
-		crtc_state->dsc.slice_config.streams_per_pipe = 2;
-	else
-		crtc_state->dsc.slice_config.streams_per_pipe = 1;
-
 	/* FIXME: initialize from VBT */
 	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
 
diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
index a639c5eb32459..e69fac4f5bdfe 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -3516,10 +3516,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 	 * throughput etc. into account.
 	 *
 	 * Also, per spec DSI supports 1, 2, 3 or 4 horizontal slices.
+	 *
+	 * FIXME: split only when necessary
 	 */
 	if (dsc->slices_per_line & BIT(2)) {
+		crtc_state->dsc.slice_config.streams_per_pipe = 2;
 		crtc_state->dsc.slice_count = 4;
 	} else if (dsc->slices_per_line & BIT(1)) {
+		crtc_state->dsc.slice_config.streams_per_pipe = 2;
 		crtc_state->dsc.slice_count = 2;
 	} else {
 		/* FIXME */
@@ -3527,6 +3531,7 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 			drm_dbg_kms(display->drm,
 				    "VBT: Unsupported DSC slice count for DSI\n");
 
+		crtc_state->dsc.slice_config.streams_per_pipe = 1;
 		crtc_state->dsc.slice_count = 1;
 	}
 
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (37 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 12:43   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 40/50] drm/i915/dp: " Imre Deak
                   ` (15 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add tracking for the DSI DSC pipes-per-line and slices-per-stream
values in the slice config state and compute the current slices-per-line
value from this slice config. The slices-per-line value used at the
moment will be removed by a follow-up change, after converting all the
places using it to use the detailed slice config instead.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_bios.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
index e69fac4f5bdfe..479c5f0158800 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -41,6 +41,7 @@
 #include "intel_display_utils.h"
 #include "intel_gmbus.h"
 #include "intel_rom.h"
+#include "intel_vdsc.h"
 
 #define _INTEL_BIOS_PRIVATE
 #include "intel_vbt_defs.h"
@@ -3519,12 +3520,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 	 *
 	 * FIXME: split only when necessary
 	 */
+	crtc_state->dsc.slice_config.pipes_per_line = 1;
+
 	if (dsc->slices_per_line & BIT(2)) {
 		crtc_state->dsc.slice_config.streams_per_pipe = 2;
-		crtc_state->dsc.slice_count = 4;
+		crtc_state->dsc.slice_config.slices_per_stream = 2;
 	} else if (dsc->slices_per_line & BIT(1)) {
 		crtc_state->dsc.slice_config.streams_per_pipe = 2;
-		crtc_state->dsc.slice_count = 2;
+		crtc_state->dsc.slice_config.slices_per_stream = 1;
 	} else {
 		/* FIXME */
 		if (!(dsc->slices_per_line & BIT(0)))
@@ -3532,9 +3535,11 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 				    "VBT: Unsupported DSC slice count for DSI\n");
 
 		crtc_state->dsc.slice_config.streams_per_pipe = 1;
-		crtc_state->dsc.slice_count = 1;
+		crtc_state->dsc.slice_config.slices_per_stream = 1;
 	}
 
+	crtc_state->dsc.slice_count = intel_dsc_line_slice_count(&crtc_state->dsc.slice_config);
+
 	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
 	    crtc_state->dsc.slice_count != 0)
 		drm_dbg_kms(display->drm,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 40/50] drm/i915/dp: Track the detailed DSC slice configuration
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (38 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 14:06   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count() Imre Deak
                   ` (14 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add tracking for the DP DSC pipes-per-line and slices-per-stream
values in the slice config state and compute the current slices-per-line
(slice_count) value using this slice config. The slices-per-line value
used at the moment will be removed by a follow-up change, after
converting all the places using it to use the slice config instead.

For now the slices-per-stream value is calculated based on the
slices-per-line value (slice_count) calculated by the
drm_dp_dsc_sink_max_slice_count() / intel_dp_dsc_get_slice_count()
functions. In a follow-up change these functions will be converted to
calculate the slices-per-stream value directly, along with the detailed
slice configuration.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 32 ++++++++++++++++---------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 03266511841e2..d17afc18fcfa7 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2356,6 +2356,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		&pipe_config->hw.adjusted_mode;
 	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
 	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
+	int slices_per_line;
 	int ret;
 
 	/*
@@ -2383,30 +2384,26 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 
 	/* Calculate Slice count */
 	if (intel_dp_is_edp(intel_dp)) {
-		pipe_config->dsc.slice_count =
+		slices_per_line =
 			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
 							true);
-		if (!pipe_config->dsc.slice_count) {
+		if (!slices_per_line) {
 			drm_dbg_kms(display->drm,
 				    "Unsupported Slice Count %d\n",
-				    pipe_config->dsc.slice_count);
+				    slices_per_line);
 			return -EINVAL;
 		}
 	} else {
-		u8 dsc_dp_slice_count;
-
-		dsc_dp_slice_count =
+		slices_per_line =
 			intel_dp_dsc_get_slice_count(connector,
 						     adjusted_mode->crtc_clock,
 						     adjusted_mode->crtc_hdisplay,
 						     num_joined_pipes);
-		if (!dsc_dp_slice_count) {
+		if (!slices_per_line) {
 			drm_dbg_kms(display->drm,
 				    "Compressed Slice Count not supported\n");
 			return -EINVAL;
 		}
-
-		pipe_config->dsc.slice_count = dsc_dp_slice_count;
 	}
 	/*
 	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
@@ -2415,14 +2412,27 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 	 * In case of Ultrajoiner along with 12 slices we need to use 3
 	 * VDSC instances.
 	 */
+	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
+
 	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
-	    pipe_config->dsc.slice_count == 12)
+	    slices_per_line == 12)
 		pipe_config->dsc.slice_config.streams_per_pipe = 3;
-	else if (pipe_config->joiner_pipes || pipe_config->dsc.slice_count > 1)
+	else if (pipe_config->joiner_pipes || slices_per_line > 1)
 		pipe_config->dsc.slice_config.streams_per_pipe = 2;
 	else
 		pipe_config->dsc.slice_config.streams_per_pipe = 1;
 
+	pipe_config->dsc.slice_config.slices_per_stream =
+		slices_per_line /
+		pipe_config->dsc.slice_config.pipes_per_line /
+		pipe_config->dsc.slice_config.streams_per_pipe;
+
+	pipe_config->dsc.slice_count =
+		intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
+
+	drm_WARN_ON(display->drm,
+		    pipe_config->dsc.slice_count != slices_per_line);
+
 	ret = intel_dp_dsc_compute_params(connector, pipe_config);
 	if (ret < 0) {
 		drm_dbg_kms(display->drm,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (39 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 40/50] drm/i915/dp: " Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 17:14   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count() Imre Deak
                   ` (13 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

By now all the places are updated to track the DSC slice configuration
in intel_crtc_state::dsc.slice_config, so calculate the slices-per-line
value using that config instead of the
intel_crtc_state::dsc.slice_count field caching the same value, and
remove the now redundant slice_count field.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_bios.c          |  6 ++----
 drivers/gpu/drm/i915/display/intel_display_types.h |  1 -
 drivers/gpu/drm/i915/display/intel_dp.c            | 11 +++++------
 drivers/gpu/drm/i915/display/intel_vdsc.c          |  7 ++++---
 4 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
index 479c5f0158800..698e569a48e61 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -3538,14 +3538,12 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 		crtc_state->dsc.slice_config.slices_per_stream = 1;
 	}
 
-	crtc_state->dsc.slice_count = intel_dsc_line_slice_count(&crtc_state->dsc.slice_config);
-
 	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
-	    crtc_state->dsc.slice_count != 0)
+	    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config) != 0)
 		drm_dbg_kms(display->drm,
 			    "VBT: DSC hdisplay %d not divisible by slice count %d\n",
 			    crtc_state->hw.adjusted_mode.crtc_hdisplay,
-			    crtc_state->dsc.slice_count);
+			    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config));
 
 	/*
 	 * The VBT rc_buffer_block_size and rc_buffer_size definitions
diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
index 574fc7ff33c97..0f56be61f081b 100644
--- a/drivers/gpu/drm/i915/display/intel_display_types.h
+++ b/drivers/gpu/drm/i915/display/intel_display_types.h
@@ -1312,7 +1312,6 @@ struct intel_crtc_state {
 		} slice_config;
 		/* Compressed Bpp in U6.4 format (first 4 bits for fractional part) */
 		u16 compressed_bpp_x16;
-		u8 slice_count;
 		struct drm_dsc_config config;
 	} dsc;
 
diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index d17afc18fcfa7..126048c5233c4 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2031,12 +2031,14 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 			} else {
 				unsigned long bw_overhead_flags =
 					pipe_config->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
+				int line_slice_count =
+					intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
 
 				if (!is_bw_sufficient_for_dsc_config(intel_dp,
 								     link_rate, lane_count,
 								     adjusted_mode->crtc_clock,
 								     adjusted_mode->hdisplay,
-								     pipe_config->dsc.slice_count,
+								     line_slice_count,
 								     dsc_bpp_x16,
 								     bw_overhead_flags))
 					continue;
@@ -2427,11 +2429,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		pipe_config->dsc.slice_config.pipes_per_line /
 		pipe_config->dsc.slice_config.streams_per_pipe;
 
-	pipe_config->dsc.slice_count =
-		intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
-
 	drm_WARN_ON(display->drm,
-		    pipe_config->dsc.slice_count != slices_per_line);
+		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
 
 	ret = intel_dp_dsc_compute_params(connector, pipe_config);
 	if (ret < 0) {
@@ -2449,7 +2448,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		    "Compressed Bpp = " FXP_Q4_FMT " Slice Count = %d\n",
 		    pipe_config->pipe_bpp,
 		    FXP_Q4_ARGS(pipe_config->dsc.compressed_bpp_x16),
-		    pipe_config->dsc.slice_count);
+		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config));
 
 	return 0;
 }
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
index 2b27671f97b32..190ce567bc7fa 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.c
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
@@ -283,8 +283,9 @@ int intel_dsc_compute_params(struct intel_crtc_state *pipe_config)
 	int ret;
 
 	vdsc_cfg->pic_width = pipe_config->hw.adjusted_mode.crtc_hdisplay;
-	vdsc_cfg->slice_width = DIV_ROUND_UP(vdsc_cfg->pic_width,
-					     pipe_config->dsc.slice_count);
+	vdsc_cfg->slice_width =
+		DIV_ROUND_UP(vdsc_cfg->pic_width,
+			     intel_dsc_line_slice_count(&pipe_config->dsc.slice_config));
 
 	err = intel_dsc_slice_dimensions_valid(pipe_config, vdsc_cfg);
 
@@ -1042,7 +1043,7 @@ static void intel_vdsc_dump_state(struct drm_printer *p, int indent,
 	drm_printf_indent(p, indent,
 			  "dsc-dss: compressed-bpp:" FXP_Q4_FMT ", slice-count: %d, num_streams: %d\n",
 			  FXP_Q4_ARGS(crtc_state->dsc.compressed_bpp_x16),
-			  crtc_state->dsc.slice_count,
+			  intel_dsc_line_slice_count(&crtc_state->dsc.slice_config),
 			  crtc_state->dsc.slice_config.streams_per_pipe);
 }
 
-- 
2.49.1



* [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (40 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 17:26   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables Imre Deak
                   ` (12 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Factor out intel_dp_dsc_min_slice_count() to make
intel_dp_dsc_get_slice_count() more readable and to prepare for a
follow-up change unifying the eDP and DP slice count/config computation.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 126048c5233c4..79b87bc041a75 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -958,14 +958,11 @@ u32 get_max_compressed_bpp_with_joiner(struct intel_display *display,
 	return max_bpp;
 }
 
-u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
-				int mode_clock, int mode_hdisplay,
-				int num_joined_pipes)
+static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
+					int mode_clock, int mode_hdisplay)
 {
 	struct intel_display *display = to_intel_display(connector);
-	u32 sink_slice_count_mask =
-		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
-	u8 min_slice_count, i;
+	u8 min_slice_count;
 	int max_slice_width;
 	int tp_rgb_yuv444;
 	int tp_yuv422_420;
@@ -1024,6 +1021,20 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 				DIV_ROUND_UP(mode_hdisplay,
 					     max_slice_width));
 
+	return min_slice_count;
+}
+
+u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
+				int mode_clock, int mode_hdisplay,
+				int num_joined_pipes)
+{
+	struct intel_display *display = to_intel_display(connector);
+	int min_slice_count =
+		intel_dp_dsc_min_slice_count(connector, mode_clock, mode_hdisplay);
+	u32 sink_slice_count_mask =
+		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
+	int i;
+
 	/* Find the closest match to the valid slice count values */
 	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
 		u8 test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
-- 
2.49.1



* [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (41 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 17:30   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line Imre Deak
                   ` (11 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

There is no reason to use the narrower u8 type for slice count
variables; use the generic int type instead.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 79b87bc041a75..1d9a130bd4060 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -962,7 +962,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 					int mode_clock, int mode_hdisplay)
 {
 	struct intel_display *display = to_intel_display(connector);
-	u8 min_slice_count;
+	int min_slice_count;
 	int max_slice_width;
 	int tp_rgb_yuv444;
 	int tp_yuv422_420;
@@ -1007,7 +1007,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 	 * slice and VDSC engine, whenever we approach close enough to max CDCLK
 	 */
 	if (mode_clock >= ((display->cdclk.max_cdclk_freq * 85) / 100))
-		min_slice_count = max_t(u8, min_slice_count, 2);
+		min_slice_count = max(min_slice_count, 2);
 
 	max_slice_width = drm_dp_dsc_sink_max_slice_width(connector->dp.dsc_dpcd);
 	if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) {
@@ -1017,9 +1017,8 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 		return 0;
 	}
 	/* Also take into account max slice width */
-	min_slice_count = max_t(u8, min_slice_count,
-				DIV_ROUND_UP(mode_hdisplay,
-					     max_slice_width));
+	min_slice_count = max(min_slice_count,
+			      DIV_ROUND_UP(mode_hdisplay, max_slice_width));
 
 	return min_slice_count;
 }
@@ -1037,7 +1036,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 
 	/* Find the closest match to the valid slice count values */
 	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
-		u8 test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
+		int test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
 
 		/*
 		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only
-- 
2.49.1



* [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (42 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-09 17:34   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration Imre Deak
                   ` (10 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Rename test_slice_count to slices_per_line for clarity.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 1d9a130bd4060..650b339fd73bc 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1036,7 +1036,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 
 	/* Find the closest match to the valid slice count values */
 	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
-		int test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
+		int slices_per_line = valid_dsc_slicecount[i] * num_joined_pipes;
 
 		/*
 		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only
@@ -1046,7 +1046,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
 			continue;
 
-		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
+		if (!(drm_dp_dsc_slice_count_to_mask(slices_per_line) &
 		      sink_slice_count_mask))
 			continue;
 
@@ -1058,11 +1058,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		if (num_joined_pipes > 1 && valid_dsc_slicecount[i] < 2)
 			continue;
 
-		if (mode_hdisplay % test_slice_count)
+		if (mode_hdisplay % slices_per_line)
 			continue;
 
-		if (min_slice_count <= test_slice_count)
-			return test_slice_count;
+		if (min_slice_count <= slices_per_line)
+			return slices_per_line;
 	}
 
 	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
-- 
2.49.1



* [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (43 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-10 12:38   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config() Imre Deak
                   ` (9 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Simplify the slice config loop in intel_dp_dsc_get_slice_count(), using
the loop iterator as the slices-per-pipe value directly, instead of
looking up the same value from an array.

While at it, move the code comment about the slice configuration closer
to where the configuration is determined and clarify it a bit.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 33 ++++++++++---------------
 1 file changed, 13 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 650b339fd73bc..a4ff1ffc8f7d4 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -107,20 +107,6 @@
 /* Constants for DP DSC configurations */
 static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15};
 
-/*
- * With Single pipe configuration, HW is capable of supporting maximum of:
- * 2 slices per line for ICL, BMG
- * 4 slices per line for other platforms.
- * For now consider a max of 2 slices per line, which works for all platforms.
- * With this we can have max of 4 DSC Slices per pipe.
- *
- * For higher resolutions where 12 slice support is required with
- * ultrajoiner, only then each pipe can support 3 slices.
- *
- * #TODO Split this better to use 4 slices/dsc engine where supported.
- */
-static const u8 valid_dsc_slicecount[] = {1, 2, 3, 4};
-
 /**
  * intel_dp_is_edp - is the given port attached to an eDP panel (either CPU or PCH)
  * @intel_dp: DP struct
@@ -1032,17 +1018,24 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		intel_dp_dsc_min_slice_count(connector, mode_clock, mode_hdisplay);
 	u32 sink_slice_count_mask =
 		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
-	int i;
+	int slices_per_pipe;
 
-	/* Find the closest match to the valid slice count values */
-	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
-		int slices_per_line = valid_dsc_slicecount[i] * num_joined_pipes;
+	/*
+	 * Find the closest match to the valid slice count values
+	 *
+	 * Max HW DSC-per-pipe x slice-per-DSC (= slice-per-pipe) capability:
+	 * ICL:  2x2
+	 * BMG:  2x2, or for ultrajoined 4 pipes: 3x1
+	 * TGL+: 2x4 (TODO: Add support for this)
+	 */
+	for (slices_per_pipe = 1; slices_per_pipe <= 4; slices_per_pipe++) {
+		int slices_per_line = slices_per_pipe * num_joined_pipes;
 
 		/*
 		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only
 		 * with Ultrajoiner only for some platforms.
 		 */
-		if (valid_dsc_slicecount[i] == 3 &&
+		if (slices_per_pipe == 3 &&
 		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
 			continue;
 
@@ -1055,7 +1048,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		  * So there should be at least 2 dsc slices per pipe,
 		  * whenever bigjoiner is enabled.
 		  */
-		if (num_joined_pipes > 1 && valid_dsc_slicecount[i] < 2)
+		if (num_joined_pipes > 1 && slices_per_pipe < 2)
 			continue;
 
 		if (mode_hdisplay % slices_per_line)
-- 
2.49.1



* [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (44 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-10 14:06   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config() Imre Deak
                   ` (8 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add intel_dsc_get_slice_config() and move the logic selecting a given
slice configuration into it from the configuration loop in
intel_dp_dsc_get_slice_count(). The same functionality can be used by
other outputs such as DSI as well; this is done in a follow-up.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c   | 22 ++++-------
 drivers/gpu/drm/i915/display/intel_vdsc.c | 48 +++++++++++++++++++++++
 drivers/gpu/drm/i915/display/intel_vdsc.h |  4 ++
 3 files changed, 59 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index a4ff1ffc8f7d4..461f80bd54cbf 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1029,28 +1029,20 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 	 * TGL+: 2x4 (TODO: Add support for this)
 	 */
 	for (slices_per_pipe = 1; slices_per_pipe <= 4; slices_per_pipe++) {
-		int slices_per_line = slices_per_pipe * num_joined_pipes;
+		struct intel_dsc_slice_config config;
+		int slices_per_line;
 
-		/*
-		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only
-		 * with Ultrajoiner only for some platforms.
-		 */
-		if (slices_per_pipe == 3 &&
-		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
+		if (!intel_dsc_get_slice_config(display,
+						num_joined_pipes, slices_per_pipe,
+						&config))
 			continue;
 
+		slices_per_line = intel_dsc_line_slice_count(&config);
+
 		if (!(drm_dp_dsc_slice_count_to_mask(slices_per_line) &
 		      sink_slice_count_mask))
 			continue;
 
-		 /*
-		  * Bigjoiner needs small joiner to be enabled.
-		  * So there should be at least 2 dsc slices per pipe,
-		  * whenever bigjoiner is enabled.
-		  */
-		if (num_joined_pipes > 1 && slices_per_pipe < 2)
-			continue;
-
 		if (mode_hdisplay % slices_per_line)
 			continue;
 
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
index 190ce567bc7fa..9910134d52653 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.c
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
@@ -40,6 +40,54 @@ int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config)
 	return config->pipes_per_line * config->streams_per_pipe * config->slices_per_stream;
 }
 
+bool intel_dsc_get_slice_config(struct intel_display *display,
+				int pipes_per_line, int slices_per_pipe,
+				struct intel_dsc_slice_config *config)
+{
+	int streams_per_pipe;
+
+	/* TODO: Add support for 8 slices per pipe on TGL+. */
+	switch (slices_per_pipe) {
+	case 3:
+		/*
+		 * 3 DSC slices per pipe need 3 DSC engines, which is supported
+		 * only with the Ultrajoiner on some platforms.
+		 */
+		if (!HAS_DSC_3ENGINES(display) || pipes_per_line != 4)
+			return false;
+
+		streams_per_pipe = 3;
+		break;
+	case 4:
+		/* TODO: Consider using 1 DSC engine stream x 4 slices instead. */
+	case 2:
+		/* TODO: Consider using 1 DSC engine stream x 2 slices instead. */
+		streams_per_pipe = 2;
+		break;
+	case 1:
+		 /*
+		  * Bigjoiner needs small joiner to be enabled.
+		  * So there should be at least 2 dsc slices per pipe,
+		  * whenever bigjoiner is enabled.
+		  */
+		if (pipes_per_line > 1)
+			return false;
+
+		streams_per_pipe = 1;
+		break;
+	default:
+		MISSING_CASE(slices_per_pipe);
+		return false;
+	}
+
+	config->pipes_per_line = pipes_per_line;
+	config->streams_per_pipe = streams_per_pipe;
+	config->slices_per_stream = slices_per_pipe / streams_per_pipe;
+
+	return true;
+}
+
+
 static bool is_pipe_dsc(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
 {
 	struct intel_display *display = to_intel_display(crtc);
diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.h b/drivers/gpu/drm/i915/display/intel_vdsc.h
index e61116d5297c8..aeb17670307b1 100644
--- a/drivers/gpu/drm/i915/display/intel_vdsc.h
+++ b/drivers/gpu/drm/i915/display/intel_vdsc.h
@@ -13,11 +13,15 @@ struct drm_printer;
 enum transcoder;
 struct intel_crtc;
 struct intel_crtc_state;
+struct intel_display;
 struct intel_dsc_slice_config;
 struct intel_encoder;
 
 bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state);
 int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config);
+bool intel_dsc_get_slice_config(struct intel_display *display,
+				int pipes_per_line, int slices_per_pipe,
+				struct intel_dsc_slice_config *config);
 void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state);
 void intel_dsc_enable(const struct intel_crtc_state *crtc_state);
 void intel_dsc_disable(const struct intel_crtc_state *crtc_state);
-- 
2.49.1



* [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (45 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-10 14:44   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation Imre Deak
                   ` (7 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Use intel_dsc_get_slice_config() for DSI to compute the slice
configuration based on the slices-per-line sink capability, instead of
open-coding the same.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_bios.c | 25 ++++++++++++-----------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
index 698e569a48e61..a7f02fa518d21 100644
--- a/drivers/gpu/drm/i915/display/intel_bios.c
+++ b/drivers/gpu/drm/i915/display/intel_bios.c
@@ -3486,12 +3486,13 @@ bool intel_bios_is_dsi_present(struct intel_display *display,
 	return false;
 }
 
-static void fill_dsc(struct intel_crtc_state *crtc_state,
+static bool fill_dsc(struct intel_crtc_state *crtc_state,
 		     struct dsc_compression_parameters_entry *dsc,
 		     int dsc_max_bpc)
 {
 	struct intel_display *display = to_intel_display(crtc_state);
 	struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
+	int slices_per_line;
 	int bpc = 8;
 
 	vdsc_cfg->dsc_version_major = dsc->version_major;
@@ -3520,24 +3521,24 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 	 *
 	 * FIXME: split only when necessary
 	 */
-	crtc_state->dsc.slice_config.pipes_per_line = 1;
-
 	if (dsc->slices_per_line & BIT(2)) {
-		crtc_state->dsc.slice_config.streams_per_pipe = 2;
-		crtc_state->dsc.slice_config.slices_per_stream = 2;
+		slices_per_line = 4;
 	} else if (dsc->slices_per_line & BIT(1)) {
-		crtc_state->dsc.slice_config.streams_per_pipe = 2;
-		crtc_state->dsc.slice_config.slices_per_stream = 1;
+		slices_per_line = 2;
 	} else {
 		/* FIXME */
 		if (!(dsc->slices_per_line & BIT(0)))
 			drm_dbg_kms(display->drm,
 				    "VBT: Unsupported DSC slice count for DSI\n");
 
-		crtc_state->dsc.slice_config.streams_per_pipe = 1;
-		crtc_state->dsc.slice_config.slices_per_stream = 1;
+		slices_per_line = 1;
 	}
 
+	if (drm_WARN_ON(display->drm,
+			!intel_dsc_get_slice_config(display, 1, slices_per_line,
+						    &crtc_state->dsc.slice_config)))
+		return false;
+
 	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
 	    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config) != 0)
 		drm_dbg_kms(display->drm,
@@ -3558,6 +3559,8 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
 	vdsc_cfg->block_pred_enable = dsc->block_prediction_enable;
 
 	vdsc_cfg->slice_height = dsc->slice_height;
+
+	return true;
 }
 
 /* FIXME: initially DSI specific */
@@ -3578,9 +3581,7 @@ bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
 			if (!devdata->dsc)
 				return false;
 
-			fill_dsc(crtc_state, devdata->dsc, dsc_max_bpc);
-
-			return true;
+			return fill_dsc(crtc_state, devdata->dsc, dsc_max_bpc);
 		}
 	}
 
-- 
2.49.1



* [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (46 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-11  6:48   ` Hogander, Jouni
  2025-11-27 17:50 ` [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config() Imre Deak
                   ` (6 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Unify the DP and eDP slices-per-line computation. At the moment eDP
simply returns the maximum slices-per-line value supported by the sink,
but using the same helper function for both cases still makes sense,
since a follow-up change will compute the detailed slice config for
both.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 50 ++++++++++++-------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 461f80bd54cbf..0db401ec0156f 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -948,11 +948,20 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 					int mode_clock, int mode_hdisplay)
 {
 	struct intel_display *display = to_intel_display(connector);
+	bool is_edp =
+		connector->base.connector_type == DRM_MODE_CONNECTOR_eDP;
 	int min_slice_count;
 	int max_slice_width;
 	int tp_rgb_yuv444;
 	int tp_yuv422_420;
 
+	/*
+	 * TODO: allow using less than the maximum number of slices
+	 * supported by the eDP sink, to allow using fewer DSC engines.
+	 */
+	if (is_edp)
+		return drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, true);
+
 	/*
 	 * TODO: Use the throughput value specific to the actual RGB/YUV
 	 * format of the output.
@@ -1016,8 +1025,10 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 	struct intel_display *display = to_intel_display(connector);
 	int min_slice_count =
 		intel_dp_dsc_min_slice_count(connector, mode_clock, mode_hdisplay);
+	bool is_edp =
+		connector->base.connector_type == DRM_MODE_CONNECTOR_eDP;
 	u32 sink_slice_count_mask =
-		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
+		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, is_edp);
 	int slices_per_pipe;
 
 	/*
@@ -1470,9 +1481,13 @@ intel_dp_mode_valid(struct drm_connector *_connector,
 		if (intel_dp_is_edp(intel_dp)) {
 			dsc_max_compressed_bpp =
 				drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
+
 			dsc_slice_count =
-				drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
-								true);
+				intel_dp_dsc_get_slice_count(connector,
+							     target_clock,
+							     mode->hdisplay,
+							     num_joined_pipes);
+
 			dsc = dsc_max_compressed_bpp && dsc_slice_count;
 		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
 			unsigned long bw_overhead_flags = 0;
@@ -2380,28 +2395,13 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 	}
 
 	/* Calculate Slice count */
-	if (intel_dp_is_edp(intel_dp)) {
-		slices_per_line =
-			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
-							true);
-		if (!slices_per_line) {
-			drm_dbg_kms(display->drm,
-				    "Unsupported Slice Count %d\n",
-				    slices_per_line);
-			return -EINVAL;
-		}
-	} else {
-		slices_per_line =
-			intel_dp_dsc_get_slice_count(connector,
-						     adjusted_mode->crtc_clock,
-						     adjusted_mode->crtc_hdisplay,
-						     num_joined_pipes);
-		if (!slices_per_line) {
-			drm_dbg_kms(display->drm,
-				    "Compressed Slice Count not supported\n");
-			return -EINVAL;
-		}
-	}
+	slices_per_line = intel_dp_dsc_get_slice_count(connector,
+						       adjusted_mode->crtc_clock,
+						       adjusted_mode->crtc_hdisplay,
+						       num_joined_pipes);
+	if (!slices_per_line)
+		return -EINVAL;
+
 	/*
 	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
 	 * is greater than the maximum Cdclock and if slice count is even
-- 
2.49.1



* [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (47 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-11  6:55   ` Hogander, Jouni
  2025-12-12 18:17   ` [PATCH v2 " Imre Deak
  2025-11-27 17:50 ` [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config() Imre Deak
                   ` (5 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Add intel_dp_dsc_get_slice_config() to compute the detailed slice
configuration and use it to determine the slices-per-line value
returned by intel_dp_dsc_get_slice_count().

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 37 +++++++++++++++++++------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 0db401ec0156f..003f4b18c1175 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -971,10 +971,10 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 	 */
 	if (mode_clock > max(connector->dp.dsc_branch_caps.overall_throughput.rgb_yuv444,
 			     connector->dp.dsc_branch_caps.overall_throughput.yuv422_420))
-		return 0;
+		return false;
 
 	if (mode_hdisplay > connector->dp.dsc_branch_caps.max_line_width)
-		return 0;
+		return false;
 
 	/*
 	 * TODO: Pass the total pixel rate of all the streams transferred to
@@ -1009,7 +1009,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 		drm_dbg_kms(display->drm,
 			    "Unsupported slice width %d by DP DSC Sink device\n",
 			    max_slice_width);
-		return 0;
+		return false;
 	}
 	/* Also take into account max slice width */
 	min_slice_count = max(min_slice_count,
@@ -1018,9 +1018,11 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 	return min_slice_count;
 }
 
-u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
-				int mode_clock, int mode_hdisplay,
-				int num_joined_pipes)
+static bool
+intel_dp_dsc_get_slice_config(const struct intel_connector *connector,
+			      int mode_clock, int mode_hdisplay,
+			      int num_joined_pipes,
+			      struct intel_dsc_slice_config *config_ret)
 {
 	struct intel_display *display = to_intel_display(connector);
 	int min_slice_count =
@@ -1057,8 +1059,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		if (mode_hdisplay % slices_per_line)
 			continue;
 
-		if (min_slice_count <= slices_per_line)
-			return slices_per_line;
+		if (min_slice_count <= slices_per_line) {
+			*config_ret = config;
+
+			return true;
+		}
 	}
 
 	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
@@ -1069,7 +1074,21 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		    min_slice_count,
 		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
 
-	return 0;
+	return false;
+}
+
+u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
+				int mode_clock, int mode_hdisplay,
+				int num_joined_pipes)
+{
+	struct intel_dsc_slice_config config;
+
+	if (!intel_dp_dsc_get_slice_config(connector,
+					   mode_clock, mode_hdisplay,
+					   num_joined_pipes, &config))
+		return 0;
+
+	return intel_dsc_line_slice_count(&config);
 }
 
 static bool source_can_output(struct intel_dp *intel_dp,
-- 
2.49.1



* [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config()
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (48 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config() Imre Deak
@ 2025-11-27 17:50 ` Imre Deak
  2025-12-11  6:59   ` Hogander, Jouni
  2025-12-12 18:17   ` [PATCH v2 " Imre Deak
  2025-11-28 16:20 ` [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation Imre Deak
                   ` (4 subsequent siblings)
  54 siblings, 2 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-27 17:50 UTC (permalink / raw)
  To: intel-gfx, intel-xe

Simplify things by computing the detailed slice configuration using
intel_dp_dsc_get_slice_config(), instead of open-coding the same.

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 35 +++----------------------
 1 file changed, 3 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 003f4b18c1175..d41c75c6f7831 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2387,7 +2387,6 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		&pipe_config->hw.adjusted_mode;
 	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
 	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
-	int slices_per_line;
 	int ret;
 
 	/*
@@ -2413,39 +2412,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		}
 	}
 
-	/* Calculate Slice count */
-	slices_per_line = intel_dp_dsc_get_slice_count(connector,
-						       adjusted_mode->crtc_clock,
-						       adjusted_mode->crtc_hdisplay,
-						       num_joined_pipes);
-	if (!slices_per_line)
+	if (!intel_dp_dsc_get_slice_config(connector, adjusted_mode->crtc_clock,
+					   adjusted_mode->crtc_hdisplay, num_joined_pipes,
+					   &pipe_config->dsc.slice_config))
 		return -EINVAL;
 
-	/*
-	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
-	 * is greater than the maximum Cdclock and if slice count is even
-	 * then we need to use 2 VDSC instances.
-	 * In case of Ultrajoiner along with 12 slices we need to use 3
-	 * VDSC instances.
-	 */
-	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
-
-	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
-	    slices_per_line == 12)
-		pipe_config->dsc.slice_config.streams_per_pipe = 3;
-	else if (pipe_config->joiner_pipes || slices_per_line > 1)
-		pipe_config->dsc.slice_config.streams_per_pipe = 2;
-	else
-		pipe_config->dsc.slice_config.streams_per_pipe = 1;
-
-	pipe_config->dsc.slice_config.slices_per_stream =
-		slices_per_line /
-		pipe_config->dsc.slice_config.pipes_per_line /
-		pipe_config->dsc.slice_config.streams_per_pipe;
-
-	drm_WARN_ON(display->drm,
-		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
-
 	ret = intel_dp_dsc_compute_params(connector, pipe_config);
 	if (ret < 0) {
 		drm_dbg_kms(display->drm,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (49 preceding siblings ...)
  2025-11-27 17:50 ` [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config() Imre Deak
@ 2025-11-28 16:20 ` Imre Deak
  2025-12-12 13:23   ` Govindapillai, Vinod
  2025-11-28 18:48 ` ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation Patchwork
                   ` (3 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-11-28 16:20 UTC (permalink / raw)
  To: intel-gfx

Use intel_dp_effective_data_rate() to calculate the required link BW for
compressed streams on non-UHBR DP-SST links. This ensures that the BW is
calculated the same way for all DP output types and DSC/non-DSC modes,
during mode validation as well as during state computation.

This approach also allows accounting for the BW overhead due to DSC and
FEC being enabled on a link. Accounting for these will be added by
follow-up changes.

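[Editor's note] The shape of the sufficiency check described above can be sketched as follows; the helper names are illustrative stand-ins for intel_dp_link_required()/intel_dp_max_link_data_rate(), not the driver's exact signatures:

```c
#include <assert.h>

/*
 * Editorial sketch, not driver code: the check compares the stream's
 * required data rate, derived from the pixel clock and the link bpp
 * (stored as .4 fixed point, i.e. bpp * 16), against the link's
 * available data rate. All rates are in kbit/s, clocks in kHz.
 */
static long long required_stream_rate_kbps(long long mode_clock_khz,
					   long long link_bpp_x16)
{
	/* pixel clock [kHz] * bits per pixel, undoing the x16 scaling */
	return mode_clock_khz * link_bpp_x16 / 16;
}

static int is_bw_sufficient(long long available_kbps,
			    long long mode_clock_khz,
			    long long link_bpp_x16)
{
	return available_kbps >= required_stream_rate_kbps(mode_clock_khz,
							   link_bpp_x16);
}
```

E.g. a 148.5 MHz mode at 24 bpp needs 148500 * 24 = 3564000 kbit/s, which fits comfortably in an HBR2 x4 link's data rate; the actual driver check further scales both sides by the MST/SSC/FEC/DSC overhead factors introduced in this series.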
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 27 +++++++++++++++----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index aa55a81a9a9bf..4044bdbceaef5 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -2025,15 +2025,19 @@ static bool intel_dp_dsc_supports_format(const struct intel_connector *connector
 	return drm_dp_dsc_sink_supports_format(connector->dp.dsc_dpcd, sink_dsc_format);
 }
 
-static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
-					    u32 lane_count, u32 mode_clock,
-					    enum intel_output_format output_format,
-					    int timeslots)
+static bool is_bw_sufficient_for_dsc_config(struct intel_dp *intel_dp,
+					    int link_clock, int lane_count,
+					    int mode_clock, int mode_hdisplay,
+					    int dsc_slice_count, int link_bpp_x16,
+					    unsigned long bw_overhead_flags)
 {
-	u32 available_bw, required_bw;
+	int available_bw;
+	int required_bw;
 
-	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
-	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
+	available_bw = intel_dp_max_link_data_rate(intel_dp, link_clock, lane_count);
+	required_bw = intel_dp_link_required(link_clock, lane_count,
+					     mode_clock, mode_hdisplay,
+					     link_bpp_x16, bw_overhead_flags);
 
 	return available_bw >= required_bw;
 }
@@ -2081,11 +2085,12 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
 				if (ret)
 					continue;
 			} else {
-				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
-								     lane_count,
+				if (!is_bw_sufficient_for_dsc_config(intel_dp,
+								     link_rate, lane_count,
 								     adjusted_mode->crtc_clock,
-								     pipe_config->output_format,
-								     timeslots))
+								     adjusted_mode->hdisplay,
+								     pipe_config->dsc.slice_count,
+								     dsc_bpp_x16, 0))
 					continue;
 			}
 
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (50 preceding siblings ...)
  2025-11-28 16:20 ` [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation Imre Deak
@ 2025-11-28 18:48 ` Patchwork
  2025-11-28 20:49   ` Imre Deak
  2025-12-01  9:46 ` ✗ i915.CI.Full: " Patchwork
                   ` (2 subsequent siblings)
  54 siblings, 1 reply; 137+ messages in thread
From: Patchwork @ 2025-11-28 18:48 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 4295 bytes --]

== Series Details ==

Series: drm/i915/dp: Clean up link BW/DSC slice config computation
URL   : https://patchwork.freedesktop.org/series/158180/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_17607 -> Patchwork_158180v1
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_158180v1 absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_158180v1, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/index.html

Participating hosts (45 -> 44)
------------------------------

  Missing    (1): fi-snb-2520m 

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_158180v1:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live@dmabuf:
    - bat-arlh-3:         [PASS][1] -> [INCOMPLETE][2]
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-arlh-3/igt@i915_selftest@live@dmabuf.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-arlh-3/igt@i915_selftest@live@dmabuf.html

  
Known issues
------------

  Here are the changes found in Patchwork_158180v1 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live:
    - bat-mtlp-8:         [PASS][3] -> [DMESG-FAIL][4] ([i915#12061]) +1 other test dmesg-fail
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-mtlp-8/igt@i915_selftest@live.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-mtlp-8/igt@i915_selftest@live.html
    - bat-jsl-1:          [PASS][5] -> [DMESG-FAIL][6] ([i915#13774]) +1 other test dmesg-fail
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-jsl-1/igt@i915_selftest@live.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-jsl-1/igt@i915_selftest@live.html
    - bat-arlh-3:         [PASS][7] -> [INCOMPLETE][8] ([i915#14818] / [i915#14837])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-arlh-3/igt@i915_selftest@live.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-arlh-3/igt@i915_selftest@live.html

  * igt@i915_selftest@live@workarounds:
    - bat-dg2-9:          [PASS][9] -> [DMESG-FAIL][10] ([i915#12061]) +1 other test dmesg-fail
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-dg2-9/igt@i915_selftest@live@workarounds.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-dg2-9/igt@i915_selftest@live@workarounds.html
    - bat-mtlp-9:         [PASS][11] -> [DMESG-FAIL][12] ([i915#12061]) +1 other test dmesg-fail
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-mtlp-9/igt@i915_selftest@live@workarounds.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-mtlp-9/igt@i915_selftest@live@workarounds.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@workarounds:
    - bat-dg2-14:         [DMESG-FAIL][13] ([i915#12061]) -> [PASS][14] +1 other test pass
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-dg2-14/igt@i915_selftest@live@workarounds.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-dg2-14/igt@i915_selftest@live@workarounds.html

  
  [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
  [i915#13774]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13774
  [i915#14818]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14818
  [i915#14837]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14837


Build changes
-------------

  * Linux: CI_DRM_17607 -> Patchwork_158180v1

  CI-20190529: 20190529
  CI_DRM_17607: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8645: 8645
  Patchwork_158180v1: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/index.html

[-- Attachment #2: Type: text/html, Size: 5215 bytes --]

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation
  2025-11-28 18:48 ` ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation Patchwork
@ 2025-11-28 20:49   ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-11-28 20:49 UTC (permalink / raw)
  To: I915-ci-infra; +Cc: intel-gfx

Hi CI team,

the failures are unrelated, see the details below, could you please
forward the patchset to full testing?

On Fri, Nov 28, 2025 at 06:48:28PM +0000, Patchwork wrote:
> == Series Details ==
> 
> Series: drm/i915/dp: Clean up link BW/DSC slice config computation
> URL   : https://patchwork.freedesktop.org/series/158180/
> State : failure
> 
> == Summary ==
> 
> CI Bug Log - changes from CI_DRM_17607 -> Patchwork_158180v1
> ====================================================
> 
> Summary
> -------
> 
>   **FAILURE**
> 
>   Serious unknown changes coming with Patchwork_158180v1 absolutely need to be
>   verified manually.
>   
>   If you think the reported changes have nothing to do with the changes
>   introduced in Patchwork_158180v1, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
>   to document this new failure mode, which will reduce false positives in CI.
> 
>   External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/index.html
> 
> Participating hosts (45 -> 44)
> ------------------------------
> 
>   Missing    (1): fi-snb-2520m 
> 
> Possible new issues
> -------------------
> 
>   Here are the unknown changes that may have been introduced in Patchwork_158180v1:
> 
> ### IGT changes ###
> 
> #### Possible regressions ####
> 
>   * igt@i915_selftest@live@dmabuf:
>     - bat-arlh-3:         [PASS][1] -> [INCOMPLETE][2]
>    [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-arlh-3/igt@i915_selftest@live@dmabuf.html
>    [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-arlh-3/igt@i915_selftest@live@dmabuf.html

There are an HDMI and an eDP output getting enabled on this system.

The HDMI output is not affected by the DP/DSC changes in the patchset.

The eDP output is affected by the changes, however all the modes/timings
getting enabled in the Patchwork test runs match exactly those in the
base CI_DRM_17607 test runs, so the changes in the patchset - which
could only have an effect on the modes/timings - didn't result in any
functional change.

There are no DP DSC modes getting enabled on the system.

Based on the above the failure is unrelated to the changes in the
patchset.

The issue is a system hang during a GPU live test.

The same kind of hang during the same test and on the same kind of ARLH
system happened before at least in:
https://intel-gfx-ci.01.org/tree/drm-tip/IGT_8645/bat-arlh-2/igt@i915_selftest@live.html

> Known issues
> ------------
> 
>   Here are the changes found in Patchwork_158180v1 that come from known issues:
> 
> ### IGT changes ###
> 
> #### Issues hit ####
> 
>   * igt@i915_selftest@live:
>     - bat-mtlp-8:         [PASS][3] -> [DMESG-FAIL][4] ([i915#12061]) +1 other test dmesg-fail
>    [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-mtlp-8/igt@i915_selftest@live.html
>    [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-mtlp-8/igt@i915_selftest@live.html
>     - bat-jsl-1:          [PASS][5] -> [DMESG-FAIL][6] ([i915#13774]) +1 other test dmesg-fail
>    [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-jsl-1/igt@i915_selftest@live.html
>    [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-jsl-1/igt@i915_selftest@live.html
>     - bat-arlh-3:         [PASS][7] -> [INCOMPLETE][8] ([i915#14818] / [i915#14837])
>    [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-arlh-3/igt@i915_selftest@live.html
>    [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-arlh-3/igt@i915_selftest@live.html
> 
>   * igt@i915_selftest@live@workarounds:
>     - bat-dg2-9:          [PASS][9] -> [DMESG-FAIL][10] ([i915#12061]) +1 other test dmesg-fail
>    [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-dg2-9/igt@i915_selftest@live@workarounds.html
>    [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-dg2-9/igt@i915_selftest@live@workarounds.html
>     - bat-mtlp-9:         [PASS][11] -> [DMESG-FAIL][12] ([i915#12061]) +1 other test dmesg-fail
>    [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-mtlp-9/igt@i915_selftest@live@workarounds.html
>    [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-mtlp-9/igt@i915_selftest@live@workarounds.html
> 
>   
> #### Possible fixes ####
> 
>   * igt@i915_selftest@live@workarounds:
>     - bat-dg2-14:         [DMESG-FAIL][13] ([i915#12061]) -> [PASS][14] +1 other test pass
>    [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/bat-dg2-14/igt@i915_selftest@live@workarounds.html
>    [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/bat-dg2-14/igt@i915_selftest@live@workarounds.html
> 
>   
>   [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
>   [i915#13774]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13774
>   [i915#14818]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14818
>   [i915#14837]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14837
> 
> 
> Build changes
> -------------
> 
>   * Linux: CI_DRM_17607 -> Patchwork_158180v1
> 
>   CI-20190529: 20190529
>   CI_DRM_17607: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux
>   IGT_8645: 8645
>   Patchwork_158180v1: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux
> 
> == Logs ==
> 
> For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/index.html

^ permalink raw reply	[flat|nested] 137+ messages in thread

* ✗ i915.CI.Full: failure for drm/i915/dp: Clean up link BW/DSC slice config computation
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (51 preceding siblings ...)
  2025-11-28 18:48 ` ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation Patchwork
@ 2025-12-01  9:46 ` Patchwork
  2025-12-12 20:01 ` ✓ i915.CI.BAT: success for drm/i915/dp: Clean up link BW/DSC slice config computation (rev3) Patchwork
  2025-12-13  4:00 ` ✓ i915.CI.Full: " Patchwork
  54 siblings, 0 replies; 137+ messages in thread
From: Patchwork @ 2025-12-01  9:46 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 80844 bytes --]

== Series Details ==

Series: drm/i915/dp: Clean up link BW/DSC slice config computation
URL   : https://patchwork.freedesktop.org/series/158180/
State : failure

== Summary ==

CI Bug Log - changes from CI_DRM_17607_full -> Patchwork_158180v1_full
====================================================

Summary
-------

  **FAILURE**

  Serious unknown changes coming with Patchwork_158180v1_full absolutely need to be
  verified manually.
  
  If you think the reported changes have nothing to do with the changes
  introduced in Patchwork_158180v1_full, please notify your bug team (I915-ci-infra@lists.freedesktop.org) to allow them
  to document this new failure mode, which will reduce false positives in CI.

  

Participating hosts (10 -> 10)
------------------------------

  No changes in participating hosts

Possible new issues
-------------------

  Here are the unknown changes that may have been introduced in Patchwork_158180v1_full:

### IGT changes ###

#### Possible regressions ####

  * igt@i915_selftest@live@gem_contexts:
    - shard-dg1:          [PASS][1] -> [INCOMPLETE][2] +1 other test incomplete
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg1-18/igt@i915_selftest@live@gem_contexts.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-18/igt@i915_selftest@live@gem_contexts.html

  * igt@kms_hdr@bpc-switch-suspend@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [INCOMPLETE][3]
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_hdr@bpc-switch-suspend@pipe-a-hdmi-a-2.html

  
#### Warnings ####

  * igt@kms_hdr@bpc-switch-suspend:
    - shard-rkl:          [SKIP][4] ([i915#3555] / [i915#8228]) -> [INCOMPLETE][5]
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@kms_hdr@bpc-switch-suspend.html
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_hdr@bpc-switch-suspend.html

  
New tests
---------

  New tests have been introduced between CI_DRM_17607_full and Patchwork_158180v1_full:

### New IGT tests (1) ###

  * igt@kms_hdr@static-toggle@pipe-a-hdmi-a-2:
    - Statuses : 1 pass(s)
    - Exec time: [0.92] s

  

Known issues
------------

  Here are the changes found in Patchwork_158180v1_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@gem_bad_reloc@negative-reloc-lut:
    - shard-rkl:          NOTRUN -> [SKIP][6] ([i915#3281])
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@gem_bad_reloc@negative-reloc-lut.html

  * igt@gem_basic@multigpu-create-close:
    - shard-dg2:          NOTRUN -> [SKIP][7] ([i915#7697])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ccs@block-multicopy-inplace:
    - shard-tglu-1:       NOTRUN -> [SKIP][8] ([i915#3555] / [i915#9323])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@gem_ccs@block-multicopy-inplace.html

  * igt@gem_close_race@multigpu-basic-process:
    - shard-tglu:         NOTRUN -> [SKIP][9] ([i915#7697])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@gem_close_race@multigpu-basic-process.html

  * igt@gem_ctx_isolation@preservation-s3@bcs0:
    - shard-tglu:         [PASS][10] -> [ABORT][11] ([i915#15317]) +1 other test abort
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-tglu-4/igt@gem_ctx_isolation@preservation-s3@bcs0.html
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-10/igt@gem_ctx_isolation@preservation-s3@bcs0.html
    - shard-mtlp:         [PASS][12] -> [ABORT][13] ([i915#15317]) +2 other tests abort
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-mtlp-6/igt@gem_ctx_isolation@preservation-s3@bcs0.html
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-4/igt@gem_ctx_isolation@preservation-s3@bcs0.html

  * igt@gem_ctx_isolation@preservation-s3@rcs0:
    - shard-glk:          NOTRUN -> [INCOMPLETE][14] ([i915#13356]) +1 other test incomplete
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk9/igt@gem_ctx_isolation@preservation-s3@rcs0.html

  * igt@gem_ctx_sseu@mmap-args:
    - shard-tglu-1:       NOTRUN -> [SKIP][15] ([i915#280])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@gem_ctx_sseu@mmap-args.html

  * igt@gem_exec_balancer@parallel-bb-first:
    - shard-tglu:         NOTRUN -> [SKIP][16] ([i915#4525])
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@gem_exec_balancer@parallel-bb-first.html

  * igt@gem_exec_capture@capture-invisible@smem0:
    - shard-glk:          NOTRUN -> [SKIP][17] ([i915#6334]) +1 other test skip
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk8/igt@gem_exec_capture@capture-invisible@smem0.html

  * igt@gem_exec_reloc@basic-wc-gtt:
    - shard-mtlp:         NOTRUN -> [SKIP][18] ([i915#3281]) +1 other test skip
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gem_exec_reloc@basic-wc-gtt.html

  * igt@gem_exec_reloc@basic-write-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][19] ([i915#3281]) +2 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gem_exec_reloc@basic-write-gtt.html

  * igt@gem_exec_suspend@basic-s4-devices@smem:
    - shard-dg2:          [PASS][20] -> [ABORT][21] ([i915#15317] / [i915#7975])
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-5/igt@gem_exec_suspend@basic-s4-devices@smem.html
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-8/igt@gem_exec_suspend@basic-s4-devices@smem.html

  * igt@gem_huc_copy@huc-copy:
    - shard-glk:          NOTRUN -> [SKIP][22] ([i915#2190])
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk9/igt@gem_huc_copy@huc-copy.html

  * igt@gem_lmem_swapping@heavy-random:
    - shard-mtlp:         NOTRUN -> [SKIP][23] ([i915#4613])
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gem_lmem_swapping@heavy-random.html

  * igt@gem_lmem_swapping@verify:
    - shard-glk:          NOTRUN -> [SKIP][24] ([i915#4613])
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk6/igt@gem_lmem_swapping@verify.html

  * igt@gem_mmap_wc@invalid-flags:
    - shard-dg2:          NOTRUN -> [SKIP][25] ([i915#4083]) +1 other test skip
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gem_mmap_wc@invalid-flags.html

  * igt@gem_mmap_wc@write-gtt-read-wc:
    - shard-mtlp:         NOTRUN -> [SKIP][26] ([i915#4083]) +1 other test skip
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gem_mmap_wc@write-gtt-read-wc.html

  * igt@gem_pxp@reject-modify-context-protection-off-2:
    - shard-dg2:          NOTRUN -> [SKIP][27] ([i915#4270]) +1 other test skip
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@gem_pxp@reject-modify-context-protection-off-2.html

  * igt@gem_readwrite@read-bad-handle:
    - shard-rkl:          NOTRUN -> [SKIP][28] ([i915#3282]) +1 other test skip
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@gem_readwrite@read-bad-handle.html

  * igt@gem_render_copy@yf-tiled-ccs-to-y-tiled:
    - shard-dg2:          NOTRUN -> [SKIP][29] ([i915#5190] / [i915#8428]) +1 other test skip
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@gem_render_copy@yf-tiled-ccs-to-y-tiled.html

  * igt@gem_tiled_partial_pwrite_pread@reads:
    - shard-mtlp:         NOTRUN -> [SKIP][30] ([i915#4077]) +1 other test skip
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gem_tiled_partial_pwrite_pread@reads.html
    - shard-dg2:          NOTRUN -> [SKIP][31] ([i915#4077]) +2 other tests skip
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gem_tiled_partial_pwrite_pread@reads.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-glk:          NOTRUN -> [SKIP][32] ([i915#3323])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk6/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@map-fixed-invalidate-overlap-busy:
    - shard-dg2:          NOTRUN -> [SKIP][33] ([i915#3297] / [i915#4880])
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gem_userptr_blits@map-fixed-invalidate-overlap-busy.html
    - shard-mtlp:         NOTRUN -> [SKIP][34] ([i915#3297])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gem_userptr_blits@map-fixed-invalidate-overlap-busy.html

  * igt@gem_userptr_blits@unsync-unmap-after-close:
    - shard-tglu-1:       NOTRUN -> [SKIP][35] ([i915#3297])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@gem_userptr_blits@unsync-unmap-after-close.html

  * igt@gem_workarounds@suspend-resume-fd:
    - shard-glk:          NOTRUN -> [INCOMPLETE][36] ([i915#13356] / [i915#14586])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk1/igt@gem_workarounds@suspend-resume-fd.html

  * igt@gen3_render_tiledx_blits:
    - shard-dg2:          NOTRUN -> [SKIP][37] +5 other tests skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@gen3_render_tiledx_blits.html

  * igt@gen9_exec_parse@bb-chained:
    - shard-mtlp:         NOTRUN -> [SKIP][38] ([i915#2856])
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@gen9_exec_parse@bb-chained.html

  * igt@gen9_exec_parse@cmd-crossing-page:
    - shard-tglu-1:       NOTRUN -> [SKIP][39] ([i915#2527] / [i915#2856])
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@gen9_exec_parse@cmd-crossing-page.html

  * igt@gen9_exec_parse@secure-batches:
    - shard-tglu:         NOTRUN -> [SKIP][40] ([i915#2527] / [i915#2856])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@gen9_exec_parse@secure-batches.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-dg2:          NOTRUN -> [SKIP][41] ([i915#2856]) +1 other test skip
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@gen9_exec_parse@shadow-peek.html
    - shard-dg1:          NOTRUN -> [SKIP][42] ([i915#2527])
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@gen9_exec_parse@shadow-peek.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-tglu:         [PASS][43] -> [ABORT][44] ([i915#15342])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-tglu-6/igt@i915_module_load@reload-with-fault-injection.html
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-6/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_freq_api@freq-suspend:
    - shard-tglu-1:       NOTRUN -> [SKIP][45] ([i915#8399])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@i915_pm_freq_api@freq-suspend.html

  * igt@i915_pm_rpm@system-suspend-devices:
    - shard-glk:          NOTRUN -> [ABORT][46] ([i915#15317]) +3 other tests abort
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk1/igt@i915_pm_rpm@system-suspend-devices.html

  * igt@i915_suspend@basic-s2idle-without-i915:
    - shard-rkl:          [PASS][47] -> [ABORT][48] ([i915#15317]) +2 other tests abort
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@i915_suspend@basic-s2idle-without-i915.html
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-7/igt@i915_suspend@basic-s2idle-without-i915.html

  * igt@i915_suspend@fence-restore-tiled2untiled:
    - shard-tglu-1:       NOTRUN -> [ABORT][49] ([i915#15317])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@i915_suspend@fence-restore-tiled2untiled.html

  * igt@i915_suspend@sysfs-reader:
    - shard-glk:          [PASS][50] -> [INCOMPLETE][51] ([i915#4817])
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-glk1/igt@i915_suspend@sysfs-reader.html
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk8/igt@i915_suspend@sysfs-reader.html

  * igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-2:
    - shard-glk:          NOTRUN -> [INCOMPLETE][52] ([i915#12761] / [i915#14995])
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk6/igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-2.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels:
    - shard-glk10:        NOTRUN -> [SKIP][53] ([i915#1769])
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk10/igt@kms_atomic_transition@plane-all-modeset-transition-fencing-internal-panels.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-180:
    - shard-tglu:         NOTRUN -> [SKIP][54] ([i915#5286])
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_big_fb@4-tiled-64bpp-rotate-180.html

  * igt@kms_big_fb@4-tiled-addfb-size-offset-overflow:
    - shard-tglu-1:       NOTRUN -> [SKIP][55] ([i915#5286])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_big_fb@4-tiled-addfb-size-offset-overflow.html

  * igt@kms_big_fb@linear-16bpp-rotate-270:
    - shard-mtlp:         NOTRUN -> [SKIP][56] +1 other test skip
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_big_fb@linear-16bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-90:
    - shard-dg2:          NOTRUN -> [SKIP][57] ([i915#4538] / [i915#5190]) +1 other test skip
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_big_fb@yf-tiled-32bpp-rotate-90.html

  * igt@kms_ccs@bad-aux-stride-y-tiled-gen12-mc-ccs@pipe-c-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [SKIP][58] ([i915#14098] / [i915#6095]) +14 other tests skip
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@kms_ccs@bad-aux-stride-y-tiled-gen12-mc-ccs@pipe-c-hdmi-a-1.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs:
    - shard-tglu-1:       NOTRUN -> [SKIP][59] ([i915#12313])
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs:
    - shard-tglu-1:       NOTRUN -> [SKIP][60] ([i915#6095]) +14 other tests skip
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][61] ([i915#10307] / [i915#6095]) +61 other tests skip
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-6/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-mc-ccs@pipe-a-hdmi-a-3.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs@pipe-c-hdmi-a-1:
    - shard-tglu:         NOTRUN -> [SKIP][62] ([i915#6095]) +14 other tests skip
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs@pipe-c-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-rotation-180-yf-tiled-ccs@pipe-d-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][63] ([i915#10307] / [i915#10434] / [i915#6095]) +1 other test skip
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-4/igt@kms_ccs@crc-primary-rotation-180-yf-tiled-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][64] ([i915#6095]) +8 other tests skip
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html
    - shard-glk10:        NOTRUN -> [ABORT][65] ([i915#15317]) +3 other tests abort
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk10/igt@kms_ccs@crc-primary-suspend-y-tiled-ccs.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-a-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [ABORT][66] ([i915#15317]) +1 other test abort
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-13/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc@pipe-a-hdmi-a-3.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs@pipe-b-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][67] ([i915#14544] / [i915#6095]) +3 other tests skip
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs@pipe-b-hdmi-a-2.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][68] ([i915#14098] / [i915#14544] / [i915#6095]) +1 other test skip
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs@pipe-c-hdmi-a-2.html

  * igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs@pipe-b-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [SKIP][69] ([i915#6095]) +29 other tests skip
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@kms_ccs@missing-ccs-buffer-y-tiled-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs@pipe-a-edp-1:
    - shard-mtlp:         NOTRUN -> [SKIP][70] ([i915#6095]) +9 other tests skip
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_ccs@missing-ccs-buffer-yf-tiled-ccs@pipe-a-edp-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][71] ([i915#12313])
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs.html

  * igt@kms_ccs@random-ccs-data-yf-tiled-ccs@pipe-a-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [SKIP][72] ([i915#6095]) +55 other tests skip
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-13/igt@kms_ccs@random-ccs-data-yf-tiled-ccs@pipe-a-hdmi-a-3.html

  * igt@kms_cdclk@plane-scaling@pipe-c-dp-3:
    - shard-dg2:          NOTRUN -> [SKIP][73] ([i915#13783]) +3 other tests skip
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_cdclk@plane-scaling@pipe-c-dp-3.html

  * igt@kms_chamelium_edid@hdmi-edid-change-during-suspend:
    - shard-tglu-1:       NOTRUN -> [SKIP][74] ([i915#11151] / [i915#7828]) +1 other test skip
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_chamelium_edid@hdmi-edid-change-during-suspend.html

  * igt@kms_chamelium_frames@dp-crc-fast:
    - shard-dg2:          NOTRUN -> [SKIP][75] ([i915#11151] / [i915#7828]) +2 other tests skip
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_chamelium_frames@dp-crc-fast.html

  * igt@kms_chamelium_hpd@hdmi-hpd-fast:
    - shard-tglu:         NOTRUN -> [SKIP][76] ([i915#11151] / [i915#7828])
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_chamelium_hpd@hdmi-hpd-fast.html

  * igt@kms_colorop@plane-xr24-xr24-bypass:
    - shard-tglu:         NOTRUN -> [SKIP][77] ([i915#15343]) +1 other test skip
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_colorop@plane-xr24-xr24-bypass.html

  * igt@kms_colorop@plane-xr24-xr24-srgb_eotf:
    - shard-mtlp:         NOTRUN -> [SKIP][78] ([i915#15343]) +1 other test skip
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_colorop@plane-xr24-xr24-srgb_eotf.html
    - shard-dg2:          NOTRUN -> [SKIP][79] ([i915#15343]) +1 other test skip
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_colorop@plane-xr24-xr24-srgb_eotf.html

  * igt@kms_colorop@plane-xr30-xr30-srgb_eotf:
    - shard-glk:          NOTRUN -> [SKIP][80] +183 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk9/igt@kms_colorop@plane-xr30-xr30-srgb_eotf.html

  * igt@kms_colorop@plane-xr30-xr30-srgb_eotf-srgb_inv_eotf:
    - shard-rkl:          NOTRUN -> [SKIP][81] ([i915#15343]) +1 other test skip
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_colorop@plane-xr30-xr30-srgb_eotf-srgb_inv_eotf.html

  * igt@kms_colorop@plane-xr30-xr30-srgb_inv_eotf:
    - shard-tglu-1:       NOTRUN -> [SKIP][82] ([i915#15343]) +1 other test skip
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_colorop@plane-xr30-xr30-srgb_inv_eotf.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2:          NOTRUN -> [SKIP][83] ([i915#3299])
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@legacy:
    - shard-tglu:         NOTRUN -> [SKIP][84] ([i915#6944] / [i915#7116] / [i915#7118] / [i915#9424])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_content_protection@legacy.html

  * igt@kms_cursor_crc@cursor-random-512x170:
    - shard-tglu-1:       NOTRUN -> [SKIP][85] ([i915#13049])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_cursor_crc@cursor-random-512x170.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x512:
    - shard-dg2:          NOTRUN -> [SKIP][86] ([i915#13049])
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html

  * igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy:
    - shard-rkl:          NOTRUN -> [SKIP][87]
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_cursor_legacy@2x-cursor-vs-flip-legacy.html

  * igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic:
    - shard-dg2:          NOTRUN -> [SKIP][88] ([i915#13046] / [i915#5354]) +2 other tests skip
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_cursor_legacy@2x-nonblocking-modeset-vs-cursor-atomic.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-varying-size:
    - shard-mtlp:         NOTRUN -> [SKIP][89] ([i915#9809])
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_cursor_legacy@cursora-vs-flipb-varying-size.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-tglu-1:       NOTRUN -> [SKIP][90] ([i915#9723])
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_dp_linktrain_fallback@dp-fallback:
    - shard-tglu:         NOTRUN -> [SKIP][91] ([i915#13707])
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_dp_linktrain_fallback@dp-fallback.html

  * igt@kms_dsc@dsc-basic:
    - shard-rkl:          NOTRUN -> [SKIP][92] ([i915#3555] / [i915#3840])
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_dsc@dsc-basic.html

  * igt@kms_dsc@dsc-fractional-bpp-with-bpc:
    - shard-dg2:          NOTRUN -> [SKIP][93] ([i915#3840])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html
    - shard-mtlp:         NOTRUN -> [SKIP][94] ([i915#3840])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-tglu-1:       NOTRUN -> [SKIP][95] ([i915#3555] / [i915#3840])
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_feature_discovery@psr2:
    - shard-tglu:         NOTRUN -> [SKIP][96] ([i915#658])
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_feature_discovery@psr2.html

  * igt@kms_flip@2x-flip-vs-dpms-off-vs-modeset-interruptible:
    - shard-tglu:         NOTRUN -> [SKIP][97] ([i915#3637] / [i915#9934]) +1 other test skip
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_flip@2x-flip-vs-dpms-off-vs-modeset-interruptible.html

  * igt@kms_flip@2x-flip-vs-panning:
    - shard-dg2:          NOTRUN -> [SKIP][98] ([i915#9934]) +1 other test skip
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_flip@2x-flip-vs-panning.html
    - shard-dg1:          NOTRUN -> [SKIP][99] ([i915#9934])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@kms_flip@2x-flip-vs-panning.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-glk:          NOTRUN -> [INCOMPLETE][100] ([i915#12745] / [i915#4839])
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk9/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          NOTRUN -> [INCOMPLETE][101] ([i915#4839])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk9/igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling:
    - shard-tglu:         NOTRUN -> [SKIP][102] ([i915#2672] / [i915#3555])
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling@pipe-a-valid-mode:
    - shard-tglu:         NOTRUN -> [SKIP][103] ([i915#2587] / [i915#2672])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling:
    - shard-tglu-1:       NOTRUN -> [SKIP][104] ([i915#2672] / [i915#3555])
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-tglu-1:       NOTRUN -> [SKIP][105] ([i915#2587] / [i915#2672])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-16bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling:
    - shard-dg2:          NOTRUN -> [SKIP][106] ([i915#2672] / [i915#3555] / [i915#5190])
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling@pipe-a-valid-mode:
    - shard-dg2:          NOTRUN -> [SKIP][107] ([i915#2672])
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-16bpp-ytile-upscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-mmap-wc:
    - shard-dg2:          NOTRUN -> [SKIP][108] ([i915#8708]) +3 other tests skip
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-pri-shrfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw:
    - shard-dg2:          NOTRUN -> [SKIP][109] ([i915#5354]) +8 other tests skip
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_frontbuffer_tracking@fbc-2p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-mmap-cpu:
    - shard-tglu-1:       NOTRUN -> [SKIP][110] +13 other tests skip
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-gtt:
    - shard-dg1:          NOTRUN -> [SKIP][111] ([i915#8708])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-spr-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt:
    - shard-mtlp:         NOTRUN -> [SKIP][112] ([i915#1825]) +4 other tests skip
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-cpu:
    - shard-dg1:          NOTRUN -> [SKIP][113]
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-pri-indfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbc-suspend:
    - shard-rkl:          NOTRUN -> [ABORT][114] ([i915#15317]) +4 other tests abort
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_frontbuffer_tracking@fbc-suspend.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-gtt:
    - shard-tglu:         NOTRUN -> [SKIP][115] ([i915#15102])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-cpu:
    - shard-tglu-1:       NOTRUN -> [SKIP][116] ([i915#15102]) +5 other tests skip
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-spr-indfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-render:
    - shard-tglu:         NOTRUN -> [SKIP][117] +10 other tests skip
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbcpsr-tiling-y:
    - shard-dg2:          NOTRUN -> [SKIP][118] ([i915#10055])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_frontbuffer_tracking@fbcpsr-tiling-y.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc:
    - shard-dg2:          NOTRUN -> [SKIP][119] ([i915#15104])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-render:
    - shard-rkl:          NOTRUN -> [SKIP][120] ([i915#15102] / [i915#3023]) +1 other test skip
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt:
    - shard-dg2:          NOTRUN -> [SKIP][121] ([i915#15102] / [i915#3458]) +3 other tests skip
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_frontbuffer_tracking@psr-1p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-rte:
    - shard-glk10:        NOTRUN -> [SKIP][122] +134 other tests skip
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk10/igt@kms_frontbuffer_tracking@psr-1p-rte.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-pgflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][123] ([i915#1825]) +2 other tests skip
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-pgflip-blt.html

  * igt@kms_joiner@basic-force-ultra-joiner:
    - shard-tglu:         NOTRUN -> [SKIP][124] ([i915#12394])
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_joiner@basic-force-ultra-joiner.html

  * igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1:
    - shard-glk:          NOTRUN -> [INCOMPLETE][125] ([i915#12756] / [i915#13409] / [i915#13476]) +1 other test incomplete
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk8/igt@kms_pipe_crc_basic@suspend-read-crc@pipe-a-hdmi-a-1.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-c:
    - shard-mtlp:         NOTRUN -> [SKIP][126] ([i915#15329]) +4 other tests skip
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-c.html

  * igt@kms_pm_rpm@dpms-mode-unset-non-lpsp:
    - shard-rkl:          [PASS][127] -> [SKIP][128] ([i915#15073])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html
    - shard-mtlp:         NOTRUN -> [SKIP][129] ([i915#15073])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_pm_rpm@dpms-mode-unset-non-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-dg2:          [PASS][130] -> [SKIP][131] ([i915#15073])
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-7/igt@kms_pm_rpm@modeset-non-lpsp.html
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-4/igt@kms_pm_rpm@modeset-non-lpsp.html
    - shard-rkl:          NOTRUN -> [SKIP][132] ([i915#15073])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@kms_pm_rpm@modeset-non-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress:
    - shard-tglu-1:       NOTRUN -> [SKIP][133] ([i915#15073])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_pm_rpm@modeset-non-lpsp-stress.html

  * igt@kms_pm_rpm@system-suspend-idle:
    - shard-tglu:         NOTRUN -> [ABORT][134] ([i915#15317])
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_pm_rpm@system-suspend-idle.html

  * igt@kms_prime@d3hot:
    - shard-rkl:          NOTRUN -> [SKIP][135] ([i915#6524])
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_prime@d3hot.html

  * igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area:
    - shard-rkl:          NOTRUN -> [SKIP][136] ([i915#11520])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@kms_psr2_sf@fbc-pr-plane-move-sf-dmg-area.html

  * igt@kms_psr2_sf@pr-plane-move-sf-dmg-area:
    - shard-glk10:        NOTRUN -> [SKIP][137] ([i915#11520]) +3 other tests skip
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk10/igt@kms_psr2_sf@pr-plane-move-sf-dmg-area.html
    - shard-tglu:         NOTRUN -> [SKIP][138] ([i915#11520])
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_psr2_sf@pr-plane-move-sf-dmg-area.html

  * igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb:
    - shard-tglu-1:       NOTRUN -> [SKIP][139] ([i915#11520]) +2 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb.html

  * igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-sf:
    - shard-dg2:          NOTRUN -> [SKIP][140] ([i915#11520]) +1 other test skip
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_psr2_sf@psr2-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area:
    - shard-glk:          NOTRUN -> [SKIP][141] ([i915#11520]) +4 other tests skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-glk6/igt@kms_psr2_sf@psr2-overlay-primary-update-sf-dmg-area.html

  * igt@kms_psr@fbc-pr-primary-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][142] ([i915#1072] / [i915#9732]) +6 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@kms_psr@fbc-pr-primary-mmap-gtt.html

  * igt@kms_psr@fbc-psr-primary-mmap-cpu:
    - shard-dg1:          NOTRUN -> [SKIP][143] ([i915#1072] / [i915#9732]) +1 other test skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@kms_psr@fbc-psr-primary-mmap-cpu.html

  * igt@kms_psr@fbc-psr-primary-render:
    - shard-tglu-1:       NOTRUN -> [SKIP][144] ([i915#9732]) +3 other tests skip
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_psr@fbc-psr-primary-render.html

  * igt@kms_psr@pr-sprite-mmap-cpu:
    - shard-tglu:         NOTRUN -> [SKIP][145] ([i915#9732]) +4 other tests skip
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-8/igt@kms_psr@pr-sprite-mmap-cpu.html

  * igt@kms_setmode@basic:
    - shard-tglu:         [PASS][146] -> [FAIL][147] ([i915#15106]) +2 other tests fail
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-tglu-10/igt@kms_setmode@basic.html
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-5/igt@kms_setmode@basic.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-mtlp:         NOTRUN -> [SKIP][148] ([i915#3555] / [i915#8809])
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_setmode@clone-exclusive-crtc.html
    - shard-dg2:          NOTRUN -> [SKIP][149] ([i915#3555])
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_tiled_display@basic-test-pattern-with-chamelium:
    - shard-tglu-1:       NOTRUN -> [SKIP][150] ([i915#8623])
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_tiled_display@basic-test-pattern-with-chamelium.html

  * igt@kms_vblank@ts-continuation-dpms-suspend@pipe-a-dp-3:
    - shard-dg2:          NOTRUN -> [ABORT][151] ([i915#15317]) +2 other tests abort
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_vblank@ts-continuation-dpms-suspend@pipe-a-dp-3.html

  * igt@kms_vrr@flip-suspend:
    - shard-tglu-1:       NOTRUN -> [SKIP][152] ([i915#3555])
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-1/igt@kms_vrr@flip-suspend.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-mtlp:         NOTRUN -> [SKIP][153] ([i915#2437])
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@kms_writeback@writeback-invalid-parameters.html
    - shard-dg2:          NOTRUN -> [SKIP][154] ([i915#2437])
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_writeback@writeback-invalid-parameters.html

  * igt@perf@global-sseu-config-invalid:
    - shard-mtlp:         NOTRUN -> [SKIP][155] ([i915#7387])
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-3/igt@perf@global-sseu-config-invalid.html
    - shard-dg2:          NOTRUN -> [SKIP][156] ([i915#7387])
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@perf@global-sseu-config-invalid.html

  * igt@prime_vgem@fence-flip-hang:
    - shard-dg1:          NOTRUN -> [SKIP][157] ([i915#3708])
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@prime_vgem@fence-flip-hang.html
    - shard-dg2:          NOTRUN -> [SKIP][158] ([i915#3708])
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-5/igt@prime_vgem@fence-flip-hang.html

  
#### Possible fixes ####

  * igt@gem_ctx_isolation@preservation-s3@rcs0:
    - shard-tglu:         [ABORT][159] ([i915#15317]) -> [PASS][160] +1 other test pass
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-tglu-4/igt@gem_ctx_isolation@preservation-s3@rcs0.html
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-tglu-10/igt@gem_ctx_isolation@preservation-s3@rcs0.html

  * igt@gem_exec_suspend@basic-s4-devices@lmem0:
    - shard-dg2:          [ABORT][161] ([i915#15317] / [i915#7975]) -> [PASS][162]
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-5/igt@gem_exec_suspend@basic-s4-devices@lmem0.html
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-8/igt@gem_exec_suspend@basic-s4-devices@lmem0.html

  * igt@gem_lmem_swapping@smem-oom@lmem0:
    - shard-dg1:          [ABORT][163] -> [PASS][164] +1 other test pass
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg1-14/igt@gem_lmem_swapping@smem-oom@lmem0.html
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg1-14/igt@gem_lmem_swapping@smem-oom@lmem0.html

  * igt@gem_workarounds@suspend-resume:
    - shard-rkl:          [ABORT][165] ([i915#15317]) -> [PASS][166]
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-7/igt@gem_workarounds@suspend-resume.html
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@gem_workarounds@suspend-resume.html

  * igt@i915_module_load@load:
    - shard-rkl:          ([PASS][167], [PASS][168], [PASS][169], [PASS][170], [PASS][171], [PASS][172], [PASS][173], [PASS][174], [PASS][175], [PASS][176], [PASS][177], [PASS][178], [PASS][179], [PASS][180], [PASS][181], [PASS][182], [PASS][183], [PASS][184], [PASS][185], [PASS][186], [PASS][187], [SKIP][188], [PASS][189], [PASS][190]) ([i915#14785]) -> ([PASS][191], [PASS][192], [PASS][193], [PASS][194], [PASS][195], [PASS][196], [PASS][197], [PASS][198], [PASS][199], [PASS][200], [PASS][201], [PASS][202], [PASS][203], [PASS][204], [PASS][205], [PASS][206], [PASS][207], [PASS][208], [PASS][209], [PASS][210], [PASS][211], [PASS][212], [PASS][213], [PASS][214])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-1/igt@i915_module_load@load.html
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-1/igt@i915_module_load@load.html
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-2/igt@i915_module_load@load.html
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-2/igt@i915_module_load@load.html
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-2/igt@i915_module_load@load.html
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-3/igt@i915_module_load@load.html
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-3/igt@i915_module_load@load.html
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-3/igt@i915_module_load@load.html
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-4/igt@i915_module_load@load.html
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-4/igt@i915_module_load@load.html
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-4/igt@i915_module_load@load.html
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@i915_module_load@load.html
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@i915_module_load@load.html
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@i915_module_load@load.html
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@i915_module_load@load.html
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@i915_module_load@load.html
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@i915_module_load@load.html
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@i915_module_load@load.html
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-7/igt@i915_module_load@load.html
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-7/igt@i915_module_load@load.html
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-7/igt@i915_module_load@load.html
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@i915_module_load@load.html
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@i915_module_load@load.html
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@i915_module_load@load.html
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-4/igt@i915_module_load@load.html
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@i915_module_load@load.html
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-4/igt@i915_module_load@load.html
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@i915_module_load@load.html
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@i915_module_load@load.html
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@i915_module_load@load.html
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@i915_module_load@load.html
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@i915_module_load@load.html
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-3/igt@i915_module_load@load.html
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-3/igt@i915_module_load@load.html
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-4/igt@i915_module_load@load.html
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-7/igt@i915_module_load@load.html
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-7/igt@i915_module_load@load.html
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-1/igt@i915_module_load@load.html
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-7/igt@i915_module_load@load.html
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-3/igt@i915_module_load@load.html
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@i915_module_load@load.html
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@i915_module_load@load.html
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-2/igt@i915_module_load@load.html
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@i915_module_load@load.html
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@i915_module_load@load.html
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@i915_module_load@load.html
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-8/igt@i915_module_load@load.html
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-1/igt@i915_module_load@load.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-dg2:          [ABORT][215] ([i915#15342]) -> [PASS][216]
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-8/igt@i915_module_load@reload-with-fault-injection.html
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@i915_module_load@reload-with-fault-injection.html

  * igt@kms_async_flips@async-flip-suspend-resume@pipe-d-edp-1:
    - shard-mtlp:         [ABORT][217] ([i915#15317]) -> [PASS][218] +4 other tests pass
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-mtlp-5/igt@kms_async_flips@async-flip-suspend-resume@pipe-d-edp-1.html
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-5/igt@kms_async_flips@async-flip-suspend-resume@pipe-d-edp-1.html

  * igt@kms_dp_aux_dev:
    - shard-dg2:          [SKIP][219] ([i915#1257]) -> [PASS][220]
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-6/igt@kms_dp_aux_dev.html
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-11/igt@kms_dp_aux_dev.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render:
    - shard-snb:          [SKIP][221] -> [PASS][222]
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-snb1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render.html
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-snb4/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-cur-indfb-draw-render.html

  * igt@kms_hdr@static-toggle:
    - shard-rkl:          [SKIP][223] ([i915#3555] / [i915#8228]) -> [PASS][224]
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-3/igt@kms_hdr@static-toggle.html
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-1/igt@kms_hdr@static-toggle.html

  * igt@kms_pm_rpm@dpms-lpsp:
    - shard-dg2:          [SKIP][225] ([i915#15073]) -> [PASS][226]
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-3/igt@kms_pm_rpm@dpms-lpsp.html
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-4/igt@kms_pm_rpm@dpms-lpsp.html

  * igt@kms_setmode@basic@pipe-a-hdmi-a-1:
    - shard-rkl:          [FAIL][227] ([i915#15106]) -> [PASS][228] +2 other tests pass
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_setmode@basic@pipe-a-hdmi-a-1.html
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_setmode@basic@pipe-a-hdmi-a-1.html

  * igt@perf_pmu@busy-double-start@bcs0:
    - shard-mtlp:         [FAIL][229] ([i915#4349]) -> [PASS][230]
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-mtlp-4/igt@perf_pmu@busy-double-start@bcs0.html
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-mtlp-7/igt@perf_pmu@busy-double-start@bcs0.html

  * igt@perf_pmu@busy-double-start@vecs1:
    - shard-dg2:          [FAIL][231] ([i915#4349]) -> [PASS][232] +4 other tests pass
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-4/igt@perf_pmu@busy-double-start@vecs1.html
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-6/igt@perf_pmu@busy-double-start@vecs1.html

  
#### Warnings ####

  * igt@api_intel_bb@crc32:
    - shard-rkl:          [SKIP][233] ([i915#6230]) -> [SKIP][234] ([i915#14544] / [i915#6230])
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@api_intel_bb@crc32.html
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@api_intel_bb@crc32.html

  * igt@gem_basic@multigpu-create-close:
    - shard-rkl:          [SKIP][235] ([i915#14544] / [i915#7697]) -> [SKIP][236] ([i915#7697])
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@gem_basic@multigpu-create-close.html
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@gem_basic@multigpu-create-close.html

  * igt@gem_exec_reloc@basic-range-active:
    - shard-rkl:          [SKIP][237] ([i915#3281]) -> [SKIP][238] ([i915#14544] / [i915#3281]) +2 other tests skip
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@gem_exec_reloc@basic-range-active.html
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@gem_exec_reloc@basic-range-active.html

  * igt@gem_exec_reloc@basic-write-gtt:
    - shard-rkl:          [SKIP][239] ([i915#14544] / [i915#3281]) -> [SKIP][240] ([i915#3281]) +2 other tests skip
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@gem_exec_reloc@basic-write-gtt.html
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@gem_exec_reloc@basic-write-gtt.html

  * igt@gem_lmem_swapping@verify-ccs:
    - shard-rkl:          [SKIP][241] ([i915#14544] / [i915#4613]) -> [SKIP][242] ([i915#4613]) +1 other test skip
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@gem_lmem_swapping@verify-ccs.html
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@gem_lmem_swapping@verify-ccs.html

  * igt@gem_partial_pwrite_pread@writes-after-reads-uncached:
    - shard-rkl:          [SKIP][243] ([i915#3282]) -> [SKIP][244] ([i915#14544] / [i915#3282])
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@gem_partial_pwrite_pread@writes-after-reads-uncached.html
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@gem_partial_pwrite_pread@writes-after-reads-uncached.html

  * igt@gem_tiled_partial_pwrite_pread@reads:
    - shard-rkl:          [SKIP][245] ([i915#14544] / [i915#3282]) -> [SKIP][246] ([i915#3282])
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@gem_tiled_partial_pwrite_pread@reads.html
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@gem_tiled_partial_pwrite_pread@reads.html

  * igt@gen7_exec_parse@oacontrol-tracking:
    - shard-rkl:          [SKIP][247] -> [SKIP][248] ([i915#14544]) +3 other tests skip
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@gen7_exec_parse@oacontrol-tracking.html
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@gen7_exec_parse@oacontrol-tracking.html

  * igt@gen9_exec_parse@bb-chained:
    - shard-rkl:          [SKIP][249] ([i915#14544] / [i915#2527]) -> [SKIP][250] ([i915#2527])
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@gen9_exec_parse@bb-chained.html
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@gen9_exec_parse@bb-chained.html

  * igt@i915_pm_freq_api@freq-suspend@gt0:
    - shard-dg2:          [ABORT][251] ([i915#15317]) -> [INCOMPLETE][252] ([i915#13356] / [i915#13820]) +1 other test incomplete
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-3/igt@i915_pm_freq_api@freq-suspend@gt0.html
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-3/igt@i915_pm_freq_api@freq-suspend@gt0.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip:
    - shard-rkl:          [SKIP][253] ([i915#14544] / [i915#5286]) -> [SKIP][254] ([i915#5286])
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip.html
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-180-hflip.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip:
    - shard-rkl:          [SKIP][255] ([i915#5286]) -> [SKIP][256] ([i915#14544] / [i915#5286]) +1 other test skip
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-180-hflip.html

  * igt@kms_big_fb@linear-16bpp-rotate-270:
    - shard-rkl:          [SKIP][257] ([i915#14544] / [i915#3638]) -> [SKIP][258] ([i915#3638]) +1 other test skip
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_big_fb@linear-16bpp-rotate-270.html
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_big_fb@linear-16bpp-rotate-270.html

  * igt@kms_big_fb@x-tiled-16bpp-rotate-90:
    - shard-rkl:          [SKIP][259] ([i915#3638]) -> [SKIP][260] ([i915#14544] / [i915#3638])
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_big_fb@x-tiled-16bpp-rotate-90.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs:
    - shard-rkl:          [SKIP][261] ([i915#14098] / [i915#14544] / [i915#6095]) -> [SKIP][262] ([i915#14098] / [i915#6095]) +3 other tests skip
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs.html
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_ccs@bad-rotation-90-4-tiled-mtl-mc-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs:
    - shard-rkl:          [SKIP][263] ([i915#14098] / [i915#6095]) -> [SKIP][264] ([i915#14098] / [i915#14544] / [i915#6095]) +1 other test skip
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs.html
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-mtl-mc-ccs.html

  * igt@kms_cdclk@plane-scaling:
    - shard-rkl:          [SKIP][265] ([i915#14544] / [i915#3742]) -> [SKIP][266] ([i915#3742])
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_cdclk@plane-scaling.html
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium_audio@hdmi-audio:
    - shard-rkl:          [SKIP][267] ([i915#11151] / [i915#7828]) -> [SKIP][268] ([i915#11151] / [i915#14544] / [i915#7828]) +1 other test skip
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_chamelium_audio@hdmi-audio.html
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_chamelium_audio@hdmi-audio.html

  * igt@kms_chamelium_frames@dp-crc-fast:
    - shard-rkl:          [SKIP][269] ([i915#11151] / [i915#14544] / [i915#7828]) -> [SKIP][270] ([i915#11151] / [i915#7828]) +1 other test skip
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_chamelium_frames@dp-crc-fast.html
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_chamelium_frames@dp-crc-fast.html

  * igt@kms_colorop@plane-xr24-xr24-ctm_3x4_bt709_dec_enc:
    - shard-rkl:          [SKIP][271] ([i915#14544] / [i915#15343]) -> [SKIP][272] ([i915#15343]) +1 other test skip
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_colorop@plane-xr24-xr24-ctm_3x4_bt709_dec_enc.html
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_colorop@plane-xr24-xr24-ctm_3x4_bt709_dec_enc.html

  * igt@kms_colorop@plane-xr30-xr30-ctm_3x4_bt709_dec:
    - shard-rkl:          [SKIP][273] ([i915#15343]) -> [SKIP][274] ([i915#14544] / [i915#15343])
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_colorop@plane-xr30-xr30-ctm_3x4_bt709_dec.html
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_colorop@plane-xr30-xr30-ctm_3x4_bt709_dec.html

  * igt@kms_content_protection@dp-mst-lic-type-0:
    - shard-rkl:          [SKIP][275] ([i915#3116]) -> [SKIP][276] ([i915#14544] / [i915#3116])
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-8/igt@kms_content_protection@dp-mst-lic-type-0.html
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_content_protection@dp-mst-lic-type-0.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-rkl:          [SKIP][277] ([i915#14544] / [i915#3116]) -> [SKIP][278] ([i915#3116])
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_content_protection@dp-mst-type-0.html
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@suspend-resume:
    - shard-dg2:          [FAIL][279] ([i915#7173]) -> [SKIP][280] ([i915#6944])
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-11/igt@kms_content_protection@suspend-resume.html
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-6/igt@kms_content_protection@suspend-resume.html

  * igt@kms_cursor_crc@cursor-onscreen-512x512:
    - shard-rkl:          [SKIP][281] ([i915#13049]) -> [SKIP][282] ([i915#13049] / [i915#14544])
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_cursor_crc@cursor-onscreen-512x512.html
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_cursor_crc@cursor-onscreen-512x512.html

  * igt@kms_cursor_crc@cursor-rapid-movement-512x512:
    - shard-rkl:          [SKIP][283] ([i915#13049] / [i915#14544]) -> [SKIP][284] ([i915#13049])
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_cursor_crc@cursor-rapid-movement-512x512.html

  * igt@kms_cursor_legacy@cursorb-vs-flipa-legacy:
    - shard-rkl:          [SKIP][285] ([i915#14544]) -> [SKIP][286] +4 other tests skip
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_cursor_legacy@cursorb-vs-flipa-legacy.html

  * igt@kms_dsc@dsc-fractional-bpp-with-bpc:
    - shard-rkl:          [SKIP][287] ([i915#14544] / [i915#3840]) -> [SKIP][288] ([i915#3840])
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_dsc@dsc-fractional-bpp-with-bpc.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-rkl:          [SKIP][289] ([i915#3555] / [i915#3840]) -> [SKIP][290] ([i915#14544] / [i915#3555] / [i915#3840])
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_dsc@dsc-with-output-formats.html
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_flip@2x-blocking-absolute-wf_vblank:
    - shard-rkl:          [SKIP][291] ([i915#14544] / [i915#9934]) -> [SKIP][292] ([i915#9934])
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_flip@2x-blocking-absolute-wf_vblank.html
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_flip@2x-blocking-absolute-wf_vblank.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling:
    - shard-rkl:          [SKIP][293] ([i915#14544] / [i915#2672] / [i915#3555]) -> [SKIP][294] ([i915#2672] / [i915#3555])
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling.html
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-valid-mode:
    - shard-rkl:          [SKIP][295] ([i915#14544] / [i915#2672]) -> [SKIP][296] ([i915#2672])
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-valid-mode.html
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tile-downscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-plflip-blt:
    - shard-dg2:          [SKIP][297] ([i915#15102] / [i915#3458]) -> [SKIP][298] ([i915#10433] / [i915#15102] / [i915#3458])
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-dg2-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-plflip-blt.html
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-dg2-4/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-shrfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-rte:
    - shard-rkl:          [SKIP][299] ([i915#14544] / [i915#15102] / [i915#3023]) -> [SKIP][300] ([i915#15102] / [i915#3023]) +4 other tests skip
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-rte.html
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-1p-rte.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-pgflip-blt:
    - shard-rkl:          [SKIP][301] ([i915#1825]) -> [SKIP][302] ([i915#14544] / [i915#1825]) +5 other tests skip
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-pgflip-blt.html
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-primscrn-indfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-plflip-blt:
    - shard-rkl:          [SKIP][303] ([i915#14544] / [i915#1825]) -> [SKIP][304] ([i915#1825]) +9 other tests skip
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-plflip-blt.html
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-gtt:
    - shard-rkl:          [SKIP][305] ([i915#15102]) -> [SKIP][306] ([i915#14544] / [i915#15102]) +1 other test skip
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-gtt.html
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc:
    - shard-rkl:          [SKIP][307] ([i915#14544] / [i915#15102]) -> [SKIP][308] ([i915#15102])
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc.html
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-pwrite:
    - shard-rkl:          [SKIP][309] ([i915#15102] / [i915#3023]) -> [SKIP][310] ([i915#14544] / [i915#15102] / [i915#3023]) +3 other tests skip
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-pwrite.html
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-pwrite.html

  * igt@kms_joiner@invalid-modeset-force-big-joiner:
    - shard-rkl:          [SKIP][311] ([i915#10656] / [i915#12388]) -> [SKIP][312] ([i915#10656] / [i915#12388] / [i915#14544])
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_joiner@invalid-modeset-force-big-joiner.html
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html

  * igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-a:
    - shard-rkl:          [SKIP][313] ([i915#14544] / [i915#15329]) -> [SKIP][314] ([i915#15329]) +7 other tests skip
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-a.html
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_plane_scaling@plane-downscale-factor-0-5-with-rotation@pipe-a.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-rkl:          [SKIP][315] ([i915#14544] / [i915#5354]) -> [SKIP][316] ([i915#5354])
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_pm_backlight@fade-with-suspend.html
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_dc@dc3co-vpb-simulation:
    - shard-rkl:          [SKIP][317] ([i915#9685]) -> [SKIP][318] ([i915#14544] / [i915#9685])
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_pm_dc@dc3co-vpb-simulation.html
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_pm_dc@dc3co-vpb-simulation.html

  * igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-exceed-sf:
    - shard-rkl:          [SKIP][319] ([i915#11520]) -> [SKIP][320] ([i915#11520] / [i915#14544]) +1 other test skip
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-exceed-sf.html
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_psr2_sf@fbc-pr-cursor-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area:
    - shard-rkl:          [SKIP][321] ([i915#11520] / [i915#14544]) -> [SKIP][322] ([i915#11520]) +1 other test skip
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_psr2_sf@fbc-pr-primary-plane-update-sf-dmg-area.html

  * igt@kms_psr@pr-cursor-plane-move:
    - shard-rkl:          [SKIP][323] ([i915#1072] / [i915#9732]) -> [SKIP][324] ([i915#1072] / [i915#14544] / [i915#9732]) +3 other tests skip
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_psr@pr-cursor-plane-move.html
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_psr@pr-cursor-plane-move.html

  * igt@kms_psr@psr2-cursor-mmap-gtt:
    - shard-rkl:          [SKIP][325] ([i915#1072] / [i915#14544] / [i915#9732]) -> [SKIP][326] ([i915#1072] / [i915#9732]) +4 other tests skip
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_psr@psr2-cursor-mmap-gtt.html
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_psr@psr2-cursor-mmap-gtt.html

  * igt@kms_setmode@clone-exclusive-crtc:
    - shard-rkl:          [SKIP][327] ([i915#14544] / [i915#3555]) -> [SKIP][328] ([i915#3555])
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_setmode@clone-exclusive-crtc.html
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_setmode@clone-exclusive-crtc.html

  * igt@kms_writeback@writeback-fb-id-xrgb2101010:
    - shard-rkl:          [SKIP][329] ([i915#2437] / [i915#9412]) -> [SKIP][330] ([i915#14544] / [i915#2437] / [i915#9412])
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-5/igt@kms_writeback@writeback-fb-id-xrgb2101010.html
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-6/igt@kms_writeback@writeback-fb-id-xrgb2101010.html

  * igt@kms_writeback@writeback-invalid-parameters:
    - shard-rkl:          [SKIP][331] ([i915#14544] / [i915#2437]) -> [SKIP][332] ([i915#2437])
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17607/shard-rkl-6/igt@kms_writeback@writeback-invalid-parameters.html
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/shard-rkl-5/igt@kms_writeback@writeback-invalid-parameters.html

  
  {name}: This element is suppressed. This means it is ignored when computing
          the status of the difference (SUCCESS, WARNING, or FAILURE).

  [i915#10055]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10055
  [i915#10307]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10307
  [i915#10433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10433
  [i915#10434]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10434
  [i915#10656]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10656
  [i915#1072]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1072
  [i915#11151]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11151
  [i915#11520]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11520
  [i915#12313]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12313
  [i915#12388]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12388
  [i915#12394]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12394
  [i915#1257]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1257
  [i915#12745]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12745
  [i915#12756]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12756
  [i915#12761]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12761
  [i915#13046]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13046
  [i915#13049]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13049
  [i915#13356]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13356
  [i915#13409]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13409
  [i915#13476]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13476
  [i915#13707]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13707
  [i915#13783]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13783
  [i915#13820]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13820
  [i915#14098]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14098
  [i915#14544]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14544
  [i915#14586]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14586
  [i915#14785]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14785
  [i915#14995]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14995
  [i915#15073]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15073
  [i915#15102]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15102
  [i915#15104]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15104
  [i915#15106]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15106
  [i915#15317]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15317
  [i915#15329]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15329
  [i915#15342]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15342
  [i915#15343]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15343
  [i915#1769]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1769
  [i915#1825]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1825
  [i915#2190]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2190
  [i915#2437]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2437
  [i915#2527]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2527
  [i915#2587]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2587
  [i915#2672]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2672
  [i915#280]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/280
  [i915#2856]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3023
  [i915#3116]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3116
  [i915#3281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3282
  [i915#3297]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3299
  [i915#3323]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3323
  [i915#3458]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3458
  [i915#3555]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3555
  [i915#3637]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3708
  [i915#3742]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3742
  [i915#3840]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3840
  [i915#4077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4077
  [i915#4083]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4083
  [i915#4270]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4270
  [i915#4349]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4349
  [i915#4525]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4525
  [i915#4538]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4538
  [i915#4613]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4613
  [i915#4817]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4817
  [i915#4839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4839
  [i915#4880]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4880
  [i915#5190]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5190
  [i915#5286]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5286
  [i915#5354]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5354
  [i915#6095]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6095
  [i915#6230]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6230
  [i915#6334]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6334
  [i915#6524]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6524
  [i915#658]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/658
  [i915#6944]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6944
  [i915#7116]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7116
  [i915#7118]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7118
  [i915#7173]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7173
  [i915#7387]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7387
  [i915#7697]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7697
  [i915#7828]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7828
  [i915#7975]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7975
  [i915#8228]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8228
  [i915#8399]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8399
  [i915#8428]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8428
  [i915#8623]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8623
  [i915#8708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8708
  [i915#8809]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8809
  [i915#9323]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9323
  [i915#9412]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9412
  [i915#9424]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9424
  [i915#9685]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9685
  [i915#9723]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9723
  [i915#9732]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9732
  [i915#9809]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9809
  [i915#9934]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9934


Build changes
-------------

  * Linux: CI_DRM_17607 -> Patchwork_158180v1

  CI-20190529: 20190529
  CI_DRM_17607: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8645: 8645
  Patchwork_158180v1: 7fe1b006b65af67bc0ef5df53aedcd265be7fb19 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v1/index.html

[-- Attachment #2: Type: text/html, Size: 107000 bytes --]

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5
  2025-11-27 17:49 ` [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5 Imre Deak
@ 2025-12-08 11:24   ` Luca Coelho
  2025-12-08 12:36     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-08 11:24 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe; +Cc: dri-devel

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> eDP 1.5 supports all the slice counts reported via DP_DSC_SLICE_CAP_1,
> so adjust drm_dp_dsc_sink_max_slice_count() accordingly.
> 
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/display/drm_dp_helper.c | 41 +++++++++++--------------
>  1 file changed, 18 insertions(+), 23 deletions(-)
> 
> diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> index f9fdf19de74a9..19564c1afba6c 100644
> --- a/drivers/gpu/drm/display/drm_dp_helper.c
> +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> @@ -2725,15 +2725,7 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
>  {
>  	u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
>  
> -	if (is_edp) {
> -		/* For eDP, register DSC_SLICE_CAPABILITIES_1 gives slice count */
> -		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> -			return 4;
> -		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> -			return 2;
> -		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> -			return 1;
> -	} else {
> +	if (!is_edp) {
>  		/* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 */
>  		u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
>  
> @@ -2743,22 +2735,25 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
>  			return 20;
>  		if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
>  			return 16;
> -		if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> -			return 12;
> -		if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> -			return 10;
> -		if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> -			return 8;
> -		if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> -			return 6;
> -		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> -			return 4;
> -		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> -			return 2;
> -		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> -			return 1;
>  	}
>  
> +	/* DP, eDP v1.5+ */
> +	if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> +		return 12;
> +	if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> +		return 10;
> +	if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> +		return 8;
> +	if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> +		return 6;
> +	/* DP, eDP v1.4+ */
> +	if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> +		return 4;
> +	if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> +		return 2;
> +	if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> +		return 1;
> +
>  	return 0;
>  }
>  EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);

I'm assuming you decided to ignore cases where, for instance,
DP_DSC_12_PER_DP_DSC_SINK would be set even though we're using eDP <
1.5, right?

The change looks good to me, I'm just wondering what would happen in
this case.

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5
  2025-12-08 11:24   ` Luca Coelho
@ 2025-12-08 12:36     ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-08 12:36 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe, dri-devel

On Mon, Dec 08, 2025 at 01:24:54PM +0200, Luca Coelho wrote:
> On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > eDP 1.5 supports all the slice counts reported via DP_DSC_SLICE_CAP_1,
> > so adjust drm_dp_dsc_sink_max_slice_count() accordingly.
> > 
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/display/drm_dp_helper.c | 41 +++++++++++--------------
> >  1 file changed, 18 insertions(+), 23 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> > index f9fdf19de74a9..19564c1afba6c 100644
> > --- a/drivers/gpu/drm/display/drm_dp_helper.c
> > +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> > @@ -2725,15 +2725,7 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> >  {
> >  	u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
> >  
> > -	if (is_edp) {
> > -		/* For eDP, register DSC_SLICE_CAPABILITIES_1 gives slice count */
> > -		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> > -			return 4;
> > -		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> > -			return 2;
> > -		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> > -			return 1;
> > -	} else {
> > +	if (!is_edp) {
> >  		/* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 */
> >  		u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
> >  
> > @@ -2743,22 +2735,25 @@ u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> >  			return 20;
> >  		if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
> >  			return 16;
> > -		if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> > -			return 12;
> > -		if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> > -			return 10;
> > -		if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> > -			return 8;
> > -		if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> > -			return 6;
> > -		if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> > -			return 4;
> > -		if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> > -			return 2;
> > -		if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> > -			return 1;
> >  	}
> >  
> > +	/* DP, eDP v1.5+ */
> > +	if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> > +		return 12;
> > +	if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> > +		return 10;
> > +	if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> > +		return 8;
> > +	if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> > +		return 6;
> > +	/* DP, eDP v1.4+ */
> > +	if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> > +		return 4;
> > +	if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> > +		return 2;
> > +	if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> > +		return 1;
> > +
> >  	return 0;
> >  }
> >  EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);
> 
> I'm assuming you decided to ignore cases where, for instance,
> DP_DSC_12_PER_DP_DSC_SINK would be set even though we're using eDP <
> 1.5, right?

An eDP sink with a DPCD_REV version less than 1.5 may support slice
counts that were only added in the eDP 1.5 Standard, while still not
indicating full compliance with the 1.5 Standard by setting its DPCD_REV
register to 1.5 (since, for instance, it doesn't support some other
feature mandated by 1.5).

This also matches what the Windows driver does, i.e. it parses all the
slice counts in the DP_DSC_SLICE_CAP_1 register for an eDP panel with
1.4 in the panel's DPCD_REV register.

> The change looks good to me, I'm just wondering what would happen in
> this case.
> 
> Reviewed-by: Luca Coelho <luciano.coelho@intel.com>
> 
> --
> Cheers,
> Luca.


* Re: [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST
  2025-11-27 17:50 ` [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST Imre Deak
@ 2025-12-08 13:08   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-08 13:08 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> The minimum/maximum DSC input (i.e. pipe) and compressed (i.e. link)
> BPP limits are computed already in intel_dp_compute_config_limits(), so
> there is no need to do this again in
> mst_stream_dsc_compute_link_config() called later. Remove the
> corresponding alignments from the latter function and use the
> precomputed (aligned and within bounds) maximum pipe BPP and the
> min/max compressed BPP values instead as-is.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 48 +++------------------
>  1 file changed, 6 insertions(+), 42 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index e3f8679e95252..24f8e60df9ac1 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -463,57 +463,21 @@ static int mst_stream_dsc_compute_link_config(struct intel_dp *intel_dp,
>  {
>  	struct intel_display *display = to_intel_display(intel_dp);
>  	struct intel_connector *connector = to_intel_connector(conn_state->connector);
> -	int num_bpc;
> -	u8 dsc_bpc[3] = {};
> -	int min_bpp, max_bpp, sink_min_bpp, sink_max_bpp;
> -	int min_compressed_bpp_x16, max_compressed_bpp_x16;
> -	int bpp_step_x16;
>  
> -	max_bpp = limits->pipe.max_bpp;
> -	min_bpp = limits->pipe.min_bpp;
> -
> -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> -						       dsc_bpc);
> -
> -	drm_dbg_kms(display->drm, "DSC Source supported min bpp %d max bpp %d\n",
> -		    min_bpp, max_bpp);
> -
> -	sink_min_bpp = min_array(dsc_bpc, num_bpc) * 3;
> -	sink_max_bpp = max_array(dsc_bpc, num_bpc) * 3;
> -
> -	drm_dbg_kms(display->drm, "DSC Sink supported min bpp %d max bpp %d\n",
> -		    sink_min_bpp, sink_max_bpp);
> -
> -	if (min_bpp < sink_min_bpp)
> -		min_bpp = sink_min_bpp;
> -
> -	if (max_bpp > sink_max_bpp)
> -		max_bpp = sink_max_bpp;
> -
> -	crtc_state->pipe_bpp = max_bpp;
> -
> -	min_compressed_bpp_x16 = limits->link.min_bpp_x16;
> -	max_compressed_bpp_x16 = limits->link.max_bpp_x16;
> +	crtc_state->pipe_bpp = limits->pipe.max_bpp;
>  
>  	drm_dbg_kms(display->drm,
>  		    "DSC Sink supported compressed min bpp " FXP_Q4_FMT " compressed max bpp " FXP_Q4_FMT "\n",
> -		    FXP_Q4_ARGS(min_compressed_bpp_x16), FXP_Q4_ARGS(max_compressed_bpp_x16));
> -
> -	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
> -
> -	max_compressed_bpp_x16 = min(max_compressed_bpp_x16, fxp_q4_from_int(crtc_state->pipe_bpp) - bpp_step_x16);
> -
> -	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
> -	min_compressed_bpp_x16 = round_up(min_compressed_bpp_x16, bpp_step_x16);
> -	max_compressed_bpp_x16 = round_down(max_compressed_bpp_x16, bpp_step_x16);
> +		    FXP_Q4_ARGS(limits->link.min_bpp_x16), FXP_Q4_ARGS(limits->link.max_bpp_x16));
>  
>  	crtc_state->lane_count = limits->max_lane_count;
>  	crtc_state->port_clock = limits->max_rate;
>  
>  	return intel_dp_mtp_tu_compute_config(intel_dp, crtc_state, conn_state,
> -					      min_compressed_bpp_x16,
> -					      max_compressed_bpp_x16,
> -					      bpp_step_x16, true);
> +					      limits->link.min_bpp_x16,
> +					      limits->link.max_bpp_x16,
> +					      intel_dp_dsc_bpp_step_x16(connector),
> +					      true);
>  }
>  
>  static int mode_hblank_period_ns(const struct drm_display_mode *mode)



* Re: [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration
  2025-11-27 17:50 ` [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration Imre Deak
@ 2025-12-09  8:24   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09  8:24 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add a way to track the detailed DSC pipes-per-line, streams-per-pipe,
> slices-per-stream configuration instead of the current streams-per-pipe
> and slices-per-line values. This describes the slice configuration in a
> clearer way, for instance providing a
> 
> 2 pipes-per-line x 2 streams-per-pipe x 2 slices-per-stream = 8 slices-per-line
> 
> view, instead of the current, coarser
> 
> 2 streams-per-pipe, 8 slices-per-line
> 
> view, the former better reflecting that each DSC stream engine has 2
> slices. This also lets the configuration be optimized in a
> simpler/clearer way, for instance using 1 stream x 2 slices or 1 stream
> x 4 slices instead of the current 2 streams x 1 slice or 2 streams x 2
> slices configuration (so that 1 DSC stream engine can be powered off in
> each pipe).
> 
> Follow-up changes will convert the current slices-per-line computation
> logic to compute the above detailed slice configuration instead.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_display_types.h | 5 +++++
>  drivers/gpu/drm/i915/display/intel_vdsc.c          | 5 +++++
>  drivers/gpu/drm/i915/display/intel_vdsc.h          | 2 ++
>  3 files changed, 12 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 38702a9e0f508..a3de93cdcbde0 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -1306,6 +1306,11 @@ struct intel_crtc_state {
>  		bool compression_enabled_on_link;
>  		bool compression_enable;
>  		int num_streams;
> +		struct intel_dsc_slice_config {
> +			int pipes_per_line;
> +			int streams_per_pipe;
> +			int slices_per_stream;
> +		} slice_config;
>  		/* Compressed Bpp in U6.4 format (first 4 bits for fractional part) */
>  		u16 compressed_bpp_x16;
>  		u8 slice_count;
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
> index 0e727fc5e80c1..8aa480e3d1c9d 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.c
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
> @@ -35,6 +35,11 @@ bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state)
>  	return true;
>  }
>  
> +int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config)
> +{
> +	return config->pipes_per_line * config->streams_per_pipe * config->slices_per_stream;
> +}
> +
>  static bool is_pipe_dsc(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
>  {
>  	struct intel_display *display = to_intel_display(crtc);
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.h b/drivers/gpu/drm/i915/display/intel_vdsc.h
> index 99f64ac54b273..e61116d5297c8 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.h
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.h
> @@ -13,9 +13,11 @@ struct drm_printer;
>  enum transcoder;
>  struct intel_crtc;
>  struct intel_crtc_state;
> +struct intel_dsc_slice_config;
>  struct intel_encoder;
>  
>  bool intel_dsc_source_support(const struct intel_crtc_state *crtc_state);
> +int intel_dsc_line_slice_count(const struct intel_dsc_slice_config *config);
>  void intel_uncompressed_joiner_enable(const struct intel_crtc_state *crtc_state);
>  void intel_dsc_enable(const struct intel_crtc_state *crtc_state);
>  void intel_dsc_disable(const struct intel_crtc_state *crtc_state);



* Re: [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state
  2025-11-27 17:50 ` [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state Imre Deak
@ 2025-12-09  8:28   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09  8:28 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Move the tracking for the DSC stream count from
> intel_crtc_state::dsc.num_streams to
> intel_crtc_state::dsc.slice_config.streams_per_pipe.
> 
> While at it add a TODO comment to read out the full DSC configuration
> from HW including the pipes-per-line and slices-per-stream values.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
> ---
>  drivers/gpu/drm/i915/display/icl_dsi.c             |  4 ++--
>  drivers/gpu/drm/i915/display/intel_display.c       |  2 +-
>  drivers/gpu/drm/i915/display/intel_display_types.h |  1 -
>  drivers/gpu/drm/i915/display/intel_dp.c            |  6 +++---
>  drivers/gpu/drm/i915/display/intel_vdsc.c          | 11 ++++++-----
>  5 files changed, 12 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
> index 9230792960f29..90076839e7152 100644
> --- a/drivers/gpu/drm/i915/display/icl_dsi.c
> +++ b/drivers/gpu/drm/i915/display/icl_dsi.c
> @@ -1626,9 +1626,9 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
>  
>  	/* FIXME: split only when necessary */
>  	if (crtc_state->dsc.slice_count > 1)
> -		crtc_state->dsc.num_streams = 2;
> +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
>  	else
> -		crtc_state->dsc.num_streams = 1;
> +		crtc_state->dsc.slice_config.streams_per_pipe = 1;
>  
>  	/* FIXME: initialize from VBT */
>  	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
> diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
> index 04f5c488f3998..aef6cfa7bde8e 100644
> --- a/drivers/gpu/drm/i915/display/intel_display.c
> +++ b/drivers/gpu/drm/i915/display/intel_display.c
> @@ -5450,7 +5450,7 @@ intel_pipe_config_compare(const struct intel_crtc_state *current_config,
>  	PIPE_CONF_CHECK_I(dsc.config.nsl_bpg_offset);
>  
>  	PIPE_CONF_CHECK_BOOL(dsc.compression_enable);
> -	PIPE_CONF_CHECK_I(dsc.num_streams);
> +	PIPE_CONF_CHECK_I(dsc.slice_config.streams_per_pipe);
>  	PIPE_CONF_CHECK_I(dsc.compressed_bpp_x16);
>  
>  	PIPE_CONF_CHECK_BOOL(splitter.enable);
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index a3de93cdcbde0..574fc7ff33c97 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -1305,7 +1305,6 @@ struct intel_crtc_state {
>  		/* Only used for state computation, not read out from the HW. */
>  		bool compression_enabled_on_link;
>  		bool compression_enable;
> -		int num_streams;
>  		struct intel_dsc_slice_config {
>  			int pipes_per_line;
>  			int streams_per_pipe;
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index de59b93388f41..03266511841e2 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2417,11 +2417,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  	 */
>  	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
>  	    pipe_config->dsc.slice_count == 12)
> -		pipe_config->dsc.num_streams = 3;
> +		pipe_config->dsc.slice_config.streams_per_pipe = 3;
>  	else if (pipe_config->joiner_pipes || pipe_config->dsc.slice_count > 1)
> -		pipe_config->dsc.num_streams = 2;
> +		pipe_config->dsc.slice_config.streams_per_pipe = 2;
>  	else
> -		pipe_config->dsc.num_streams = 1;
> +		pipe_config->dsc.slice_config.streams_per_pipe = 1;
>  
>  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
>  	if (ret < 0) {
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
> index 8aa480e3d1c9d..2b27671f97b32 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.c
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
> @@ -421,7 +421,7 @@ intel_dsc_power_domain(struct intel_crtc *crtc, enum transcoder cpu_transcoder)
>  
>  static int intel_dsc_get_vdsc_per_pipe(const struct intel_crtc_state *crtc_state)
>  {
> -	return crtc_state->dsc.num_streams;
> +	return crtc_state->dsc.slice_config.streams_per_pipe;
>  }
>  
>  int intel_dsc_get_num_vdsc_instances(const struct intel_crtc_state *crtc_state)
> @@ -1023,12 +1023,13 @@ void intel_dsc_get_config(struct intel_crtc_state *crtc_state)
>  	if (!crtc_state->dsc.compression_enable)
>  		goto out;
>  
> +	/* TODO: Read out slice_config.pipes_per_line/slices_per_stream as well */
>  	if (dss_ctl1 & JOINER_ENABLE && dss_ctl2 & (VDSC2_ENABLE | SMALL_JOINER_CONFIG_3_ENGINES))
> -		crtc_state->dsc.num_streams = 3;
> +		crtc_state->dsc.slice_config.streams_per_pipe = 3;
>  	else if (dss_ctl1 & JOINER_ENABLE && dss_ctl2 & VDSC1_ENABLE)
> -		crtc_state->dsc.num_streams = 2;
> +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
>  	else
> -		crtc_state->dsc.num_streams = 1;
> +		crtc_state->dsc.slice_config.streams_per_pipe = 1;
>  
>  	intel_dsc_get_pps_config(crtc_state);
>  out:
> @@ -1042,7 +1043,7 @@ static void intel_vdsc_dump_state(struct drm_printer *p, int indent,
>  			  "dsc-dss: compressed-bpp:" FXP_Q4_FMT ", slice-count: %d, num_streams: %d\n",
>  			  FXP_Q4_ARGS(crtc_state->dsc.compressed_bpp_x16),
>  			  crtc_state->dsc.slice_count,
> -			  crtc_state->dsc.num_streams);
> +			  crtc_state->dsc.slice_config.streams_per_pipe);
>  }
>  
>  void intel_vdsc_state_dump(struct drm_printer *p, int indent,



* Re: [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc()
  2025-11-27 17:50 ` [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc() Imre Deak
@ 2025-12-09  8:47   ` Hogander, Jouni
  2025-12-09 10:38     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09  8:47 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Move the initialization of the DSI DSC streams-per-pipe value to
> fill_dsc() next to where the corresponding (per-line) slice_count
> value is initialized. This allows converting the initialization to
> use the detailed slice configuration state in follow-up changes.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/icl_dsi.c    | 6 ------
>  drivers/gpu/drm/i915/display/intel_bios.c | 5 +++++
>  2 files changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
> index 90076839e7152..9aba3d813daae 100644
> --- a/drivers/gpu/drm/i915/display/icl_dsi.c
> +++ b/drivers/gpu/drm/i915/display/icl_dsi.c
> @@ -1624,12 +1624,6 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
>  	if (crtc_state->pipe_bpp < 8 * 3)
>  		return -EINVAL;
>  
> -	/* FIXME: split only when necessary */
> -	if (crtc_state->dsc.slice_count > 1)
> -		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> -	else
> -		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> -
>  	/* FIXME: initialize from VBT */
>  	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> index a639c5eb32459..e69fac4f5bdfe 100644
> --- a/drivers/gpu/drm/i915/display/intel_bios.c
> +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> @@ -3516,10 +3516,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  	 * throughput etc. into account.
>  	 *
>  	 * Also, per spec DSI supports 1, 2, 3 or 4 horizontal slices.
> +	 *
> +	 * FIXME: split only when necessary
>  	 */
>  	if (dsc->slices_per_line & BIT(2)) {
> +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
>  		crtc_state->dsc.slice_count = 4;
>  	} else if (dsc->slices_per_line & BIT(1)) {
> +		crtc_state->dsc.slice_config.streams_per_pipe = 2;

fill_dsc() is called by intel_bios_get_dsc_params(). Is streams_per_pipe
really a BIOS parameter? I see slices_per_line is in the VBT, and
streams_per_pipe and the existing slice_count are decided based on that.
Is this the right place to make that decision, or should we leave it to
the caller of intel_bios_get_dsc_params()?

BR,

Jouni Högander

>  		crtc_state->dsc.slice_count = 2;
>  	} else {
>  		/* FIXME */
> @@ -3527,6 +3531,7 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  			drm_dbg_kms(display->drm,
>  				    "VBT: Unsupported DSC slice count for DSI\n");
>  
> +		crtc_state->dsc.slice_config.streams_per_pipe = 1;
>  		crtc_state->dsc.slice_count = 1;
>  	}
>  



* Re: [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask()
  2025-11-27 17:49 ` [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask() Imre Deak
@ 2025-12-09  8:48   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09  8:48 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe; +Cc: dri-devel

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> A DSC sink supporting DSC slice count N does not necessarily support
> slice counts less than N. Hence the driver should check the sink's
> support for a particular slice count before using it. Add the helper
> functions required for this.
> 
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/display/drm_dp_helper.c | 82 +++++++++++++++++--------
>  include/drm/display/drm_dp_helper.h     |  3 +
>  2 files changed, 61 insertions(+), 24 deletions(-)
> 
> diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
> index 19564c1afba6c..a697cc227e289 100644
> --- a/drivers/gpu/drm/display/drm_dp_helper.c
> +++ b/drivers/gpu/drm/display/drm_dp_helper.c
> @@ -2705,56 +2705,90 @@ u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
>  EXPORT_SYMBOL(drm_dp_dsc_sink_bpp_incr);
>  
>  /**
> - * drm_dp_dsc_sink_max_slice_count() - Get the max slice count
> - * supported by the DSC sink.
> - * @dsc_dpcd: DSC capabilities from DPCD
> - * @is_edp: true if its eDP, false for DP
> + * drm_dp_dsc_slice_count_to_mask() - Convert a slice count to a slice count mask
> + * @slice_count: slice count
>   *
> - * Read the slice capabilities DPCD register from DSC sink to get
> - * the maximum slice count supported. This is used to populate
> - * the DSC parameters in the &struct drm_dsc_config by the driver.
> - * Driver creates an infoframe using these parameters to populate
> - * &struct drm_dsc_pps_infoframe. These are sent to the sink using DSC
> - * infoframe using the helper function drm_dsc_pps_infoframe_pack()
> + * Convert @slice_count to a slice count mask.
> + *
> + * Returns the slice count mask.
> + */
> +u32 drm_dp_dsc_slice_count_to_mask(int slice_count)
> +{
> +	return BIT(slice_count - 1);
> +}
> +EXPORT_SYMBOL(drm_dp_dsc_slice_count_to_mask);
> +
> +/**
> + * drm_dp_dsc_sink_slice_count_mask() - Get the mask of valid DSC sink slice counts
> + * @dsc_dpcd: the sink's DSC DPCD capabilities
> + * @is_edp: %true for an eDP sink
> + *
> + * Get the mask of supported slice counts from the sink's DSC DPCD register.
>   *
>   * Returns:
> - * Maximum slice count supported by DSC sink or 0 its invalid
> + * Mask of slice counts supported by the DSC sink:
> + * - > 0: bit#0,1,3,5..,23 set if the sink supports 1,2,4,6..,24 slices
> + * - 0:   if the sink doesn't support any slices
>   */
> -u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> -				   bool is_edp)
> +u32 drm_dp_dsc_sink_slice_count_mask(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> +				     bool is_edp)
>  {
>  	u8 slice_cap1 = dsc_dpcd[DP_DSC_SLICE_CAP_1 - DP_DSC_SUPPORT];
> +	u32 mask = 0;
>  
>  	if (!is_edp) {
>  		/* For DP, use values from DSC_SLICE_CAP_1 and DSC_SLICE_CAP2 */
>  		u8 slice_cap2 = dsc_dpcd[DP_DSC_SLICE_CAP_2 - DP_DSC_SUPPORT];
>  
>  		if (slice_cap2 & DP_DSC_24_PER_DP_DSC_SINK)
> -			return 24;
> +			mask |= drm_dp_dsc_slice_count_to_mask(24);
>  		if (slice_cap2 & DP_DSC_20_PER_DP_DSC_SINK)
> -			return 20;
> +			mask |= drm_dp_dsc_slice_count_to_mask(20);
>  		if (slice_cap2 & DP_DSC_16_PER_DP_DSC_SINK)
> -			return 16;
> +			mask |= drm_dp_dsc_slice_count_to_mask(16);
>  	}
>  
>  	/* DP, eDP v1.5+ */
>  	if (slice_cap1 & DP_DSC_12_PER_DP_DSC_SINK)
> -		return 12;
> +		mask |= drm_dp_dsc_slice_count_to_mask(12);
>  	if (slice_cap1 & DP_DSC_10_PER_DP_DSC_SINK)
> -		return 10;
> +		mask |= drm_dp_dsc_slice_count_to_mask(10);
>  	if (slice_cap1 & DP_DSC_8_PER_DP_DSC_SINK)
> -		return 8;
> +		mask |= drm_dp_dsc_slice_count_to_mask(8);
>  	if (slice_cap1 & DP_DSC_6_PER_DP_DSC_SINK)
> -		return 6;
> +		mask |= drm_dp_dsc_slice_count_to_mask(6);
>  	/* DP, eDP v1.4+ */
>  	if (slice_cap1 & DP_DSC_4_PER_DP_DSC_SINK)
> -		return 4;
> +		mask |= drm_dp_dsc_slice_count_to_mask(4);
>  	if (slice_cap1 & DP_DSC_2_PER_DP_DSC_SINK)
> -		return 2;
> +		mask |= drm_dp_dsc_slice_count_to_mask(2);
>  	if (slice_cap1 & DP_DSC_1_PER_DP_DSC_SINK)
> -		return 1;
> +		mask |= drm_dp_dsc_slice_count_to_mask(1);
>  
> -	return 0;
> +	return mask;
> +}
> +EXPORT_SYMBOL(drm_dp_dsc_sink_slice_count_mask);
> +
> +/**
> + * drm_dp_dsc_sink_max_slice_count() - Get the max slice count
> + * supported by the DSC sink.
> + * @dsc_dpcd: DSC capabilities from DPCD
> + * @is_edp: true if its eDP, false for DP
> + *
> + * Read the slice capabilities DPCD register from DSC sink to get
> + * the maximum slice count supported. This is used to populate
> + * the DSC parameters in the &struct drm_dsc_config by the driver.
> + * Driver creates an infoframe using these parameters to populate
> + * &struct drm_dsc_pps_infoframe. These are sent to the sink using DSC
> + * infoframe using the helper function drm_dsc_pps_infoframe_pack()
> + *
> + * Returns:
> + * Maximum slice count supported by DSC sink or 0 its invalid
> + */
> +u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> +				   bool is_edp)
> +{
> +	return fls(drm_dp_dsc_sink_slice_count_mask(dsc_dpcd, is_edp));
>  }
>  EXPORT_SYMBOL(drm_dp_dsc_sink_max_slice_count);
>  
> diff --git a/include/drm/display/drm_dp_helper.h b/include/drm/display/drm_dp_helper.h
> index df2f24b950e4c..85e868238e287 100644
> --- a/include/drm/display/drm_dp_helper.h
> +++ b/include/drm/display/drm_dp_helper.h
> @@ -206,6 +206,9 @@ drm_dp_is_branch(const u8 dpcd[DP_RECEIVER_CAP_SIZE])
>  
>  /* DP/eDP DSC support */
>  u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]);
> +u32 drm_dp_dsc_slice_count_to_mask(int slice_count);
> +u32 drm_dp_dsc_sink_slice_count_mask(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
> +				     bool is_edp);
>  u8 drm_dp_dsc_sink_max_slice_count(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE],
>  				   bool is_edp);
>  u8 drm_dp_dsc_sink_line_buf_depth(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE]);
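As a cross-check of the mask convention used above (bit N-1 set for slice count N, matching the fls() based max in the new drm_dp_dsc_sink_max_slice_count()), here is a minimal userspace sketch — the helper names are ad hoc, not the DRM API:

```c
#include <assert.h>

/* Ad-hoc model of drm_dp_dsc_slice_count_to_mask(): bit N-1 stands
 * for support of slice count N. */
static unsigned int slice_count_to_mask(int slice_count)
{
	return 1u << (slice_count - 1);
}

/* fls() emulation: 1-based index of the highest set bit, 0 if none.
 * With the convention above this directly yields the max slice count. */
static int fls_emul(unsigned int mask)
{
	int n = 0;

	while (mask) {
		n++;
		mask >>= 1;
	}
	return n;
}
```

With a sink advertising slice counts 1, 2 and 4 the mask is 0xb and fls() returns 4, which is what the fls() call in the wrapper relies on.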

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check
  2025-11-27 17:49 ` [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check Imre Deak
@ 2025-12-09  8:51   ` Luca Coelho
  2025-12-09  9:53     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-09  8:51 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe; +Cc: dri-devel

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> A DSC sink supporting DSC slice count N does not necessarily support
> slice counts less than N. Hence the driver should check the sink's
> support for a particular slice count before using it; fix
> intel_dp_dsc_get_slice_count() accordingly.
> 
> Cc: dri-devel@lists.freedesktop.org
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 0ec82fcbcf48e..6d232c15a0b5a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1013,6 +1013,8 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  				int num_joined_pipes)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> +	u32 sink_slice_count_mask =
> +		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
>  	u8 min_slice_count, i;
>  	int max_slice_width;
>  	int tp_rgb_yuv444;
> @@ -1084,9 +1086,9 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
>  			continue;
>  
> -		if (test_slice_count >
> -		    drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, false))
> -			break;
> +		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
> +		      sink_slice_count_mask))
> +			continue;
>  
>  		 /*
>  		  * Bigjoiner needs small joiner to be enabled.
> @@ -1103,8 +1105,14 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  			return test_slice_count;
>  	}
>  
> -	drm_dbg_kms(display->drm, "Unsupported Slice Count %d\n",
> -		    min_slice_count);
> +	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
> +	sink_slice_count_mask <<= 1;
> +	drm_dbg_kms(display->drm,
> +		    "[CONNECTOR:%d:%s] Unsupported slice count (min: %d, sink supported: %*pbl)\n",
> +		    connector->base.base.id, connector->base.name,
> +		    min_slice_count,
> +		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
> +
>  	return 0;
>  }
> 
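The shift-by-one before the %*pbl print above can be emulated in userspace to see why the printed bit numbers line up with the slice counts. A sketch under the same bit N-1 convention; it ignores %pbl's range compression (which would print e.g. "1-2,4") and the names are ad hoc:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Bit N-1 set in sink_mask means slice count N is supported. After a
 * left shift by one the set bit *numbers* equal the slice counts, so
 * printing the set-bit indices (as %*pbl does) prints the counts. */
static int print_slice_counts(unsigned int sink_mask, char *buf, size_t len)
{
	unsigned int shifted = sink_mask << 1;
	int pos = 0;

	buf[0] = '\0';
	for (int bit = 0; bit < 32; bit++)
		if (shifted & (1u << bit))
			pos += snprintf(buf + pos, len - pos,
					pos ? ",%d" : "%d", bit);

	return pos;
}
```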

I think this patch could be squashed into the previous one.  IMHO it
makes it a bit easier to see how those functions defined in the
previous patch would be used.

But nevertheless:

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp()
  2025-11-27 17:49 ` [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp() Imre Deak
@ 2025-12-09  9:10   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09  9:10 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Convert intel_dp_output_bpp() and intel_dp_mode_min_output_bpp() to
> return an x16 fixed point bpp value, as this value will always be the
> link BPP (either compressed or uncompressed) tracked in the same x16
> fixed point format.
> 
> While at it rename

This line break can be avoided.

> intel_dp_output_bpp() to intel_dp_output_format_link_bpp_x16() and
> intel_dp_mode_min_output_bpp() to intel_dp_mode_min_link_bpp_x16() to
> better reflect that these functions return an x16 link BPP value
> specific to a particular output format or mode.
> 
> Also rename intel_dp_output_bpp()'s bpp parameter to pipe_bpp, to
> clarify which kind of (pipe vs. link) BPP the parameter is.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 41 +++++++++++----------
>  drivers/gpu/drm/i915/display/intel_dp.h     |  3 +-
>  drivers/gpu/drm/i915/display/intel_dp_mst.c |  4 +-
>  3 files changed, 26 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 6d232c15a0b5a..beda340d05923 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1234,7 +1234,7 @@ int intel_dp_min_bpp(enum intel_output_format output_format)
>  		return 8 * 3;
>  }
>  
> -int intel_dp_output_bpp(enum intel_output_format output_format, int bpp)
> +int intel_dp_output_format_link_bpp_x16(enum intel_output_format output_format, int pipe_bpp)
>  {
>  	/*
>  	 * bpp value was assumed to RGB format. And YCbCr 4:2:0 output
> @@ -1242,9 +1242,9 @@ int intel_dp_output_bpp(enum intel_output_format output_format, int bpp)
>  	 * of bytes of RGB pixel.
>  	 */
>  	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
> -		bpp /= 2;
> +		pipe_bpp /= 2;
>  
> -	return bpp;
> +	return fxp_q4_from_int(pipe_bpp);
>  }
>  
>  static enum intel_output_format
> @@ -1260,8 +1260,8 @@ intel_dp_sink_format(struct intel_connector *connector,
>  }
>  
>  static int
> -intel_dp_mode_min_output_bpp(struct intel_connector *connector,
> -			     const struct drm_display_mode *mode)
> +intel_dp_mode_min_link_bpp_x16(struct intel_connector *connector,
> +			       const struct drm_display_mode *mode)
>  {
>  	enum intel_output_format output_format, sink_format;
>  
> @@ -1269,7 +1269,8 @@ intel_dp_mode_min_output_bpp(struct intel_connector *connector,
>  
>  	output_format = intel_dp_output_format(connector, sink_format);
>  
> -	return intel_dp_output_bpp(output_format, intel_dp_min_bpp(output_format));
> +	return intel_dp_output_format_link_bpp_x16(output_format,
> +						   intel_dp_min_bpp(output_format));
>  }
>  
>  static bool intel_dp_hdisplay_bad(struct intel_display *display,
> @@ -1341,11 +1342,11 @@ intel_dp_mode_valid_downstream(struct intel_connector *connector,
>  
>  	/* If PCON supports FRL MODE, check FRL bandwidth constraints */
>  	if (intel_dp->dfp.pcon_max_frl_bw) {
> +		int link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
>  		int target_bw;
>  		int max_frl_bw;
> -		int bpp = intel_dp_mode_min_output_bpp(connector, mode);
>  
> -		target_bw = bpp * target_clock;
> +		target_bw = fxp_q4_to_int_roundup(link_bpp_x16) * target_clock;
>  
>  		max_frl_bw = intel_dp->dfp.pcon_max_frl_bw;
>  
> @@ -1460,6 +1461,7 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>  	enum drm_mode_status status;
>  	bool dsc = false;
>  	int num_joined_pipes;
> +	int link_bpp_x16;
>  
>  	status = intel_cpu_transcoder_mode_valid(display, mode);
>  	if (status != MODE_OK)
> @@ -1502,8 +1504,8 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>  
>  	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
>  
> -	mode_rate = intel_dp_link_required(target_clock,
> -					   intel_dp_mode_min_output_bpp(connector, mode));
> +	link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
> +	mode_rate = intel_dp_link_required(target_clock, fxp_q4_to_int_roundup(link_bpp_x16));
>  
>  	if (intel_dp_has_dsc(connector)) {
>  		int pipe_bpp;
> @@ -1815,9 +1817,10 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  	for (bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
>  	     bpp >= fxp_q4_to_int(limits->link.min_bpp_x16);
>  	     bpp -= 2 * 3) {
> -		int link_bpp = intel_dp_output_bpp(pipe_config->output_format, bpp);
> +		int link_bpp_x16 =
> +			intel_dp_output_format_link_bpp_x16(pipe_config->output_format, bpp);
>  
> -		mode_rate = intel_dp_link_required(clock, link_bpp);
> +		mode_rate = intel_dp_link_required(clock, fxp_q4_to_int_roundup(link_bpp_x16));
>  
>  		for (i = 0; i < intel_dp->num_common_rates; i++) {
>  			link_rate = intel_dp_common_rate(intel_dp, i);
> @@ -2201,10 +2204,10 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  	struct intel_display *display = to_intel_display(intel_dp);
>  	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
>  	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
> -	int output_bpp;
>  	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
>  	int dsc_joiner_max_bpp;
>  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> +	int link_bpp_x16;
>  	int bpp_x16;
>  	int ret;
>  
> @@ -2216,8 +2219,8 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
>  
>  	/* Compressed BPP should be less than the Input DSC bpp */
> -	output_bpp = intel_dp_output_bpp(pipe_config->output_format, pipe_bpp);
> -	max_bpp_x16 = min(max_bpp_x16, fxp_q4_from_int(output_bpp) - bpp_step_x16);
> +	link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format, pipe_bpp);
> +	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
>  
>  	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
>  	min_bpp_x16 = round_up(limits->link.min_bpp_x16, bpp_step_x16);
> @@ -3267,8 +3270,8 @@ int intel_dp_compute_min_hblank(struct intel_crtc_state *crtc_state,
>  	if (crtc_state->dsc.compression_enable)
>  		link_bpp_x16 = crtc_state->dsc.compressed_bpp_x16;
>  	else
> -		link_bpp_x16 = fxp_q4_from_int(intel_dp_output_bpp(crtc_state->output_format,
> -								   crtc_state->pipe_bpp));
> +		link_bpp_x16 = intel_dp_output_format_link_bpp_x16(crtc_state->output_format,
> +								   crtc_state->pipe_bpp);
>  
>  	/* Calculate min Hblank Link Layer Symbol Cycle Count for 8b/10b MST & 128b/132b */
>  	hactive_sym_cycles = drm_dp_link_symbol_cycles(max_lane_count,
> @@ -3378,8 +3381,8 @@ intel_dp_compute_config(struct intel_encoder *encoder,
>  	if (pipe_config->dsc.compression_enable)
>  		link_bpp_x16 = pipe_config->dsc.compressed_bpp_x16;
>  	else
> -		link_bpp_x16 = fxp_q4_from_int(intel_dp_output_bpp(pipe_config->output_format,
> -								   pipe_config->pipe_bpp));
> +		link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format,
> +								   pipe_config->pipe_bpp);
>  
>  	if (intel_dp->mso_link_count) {
>  		int n = intel_dp->mso_link_count;
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 200a8b267f647..97e361458f760 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -193,7 +193,8 @@ void intel_dp_pcon_dsc_configure(struct intel_dp *intel_dp,
>  
>  void intel_dp_invalidate_source_oui(struct intel_dp *intel_dp);
>  void intel_dp_wait_source_oui(struct intel_dp *intel_dp);
> -int intel_dp_output_bpp(enum intel_output_format output_format, int bpp);
> +int intel_dp_output_format_link_bpp_x16(enum intel_output_format output_format,
> +					int pipe_bpp);

I'm not the biggest fan of very long function names like this, but I
can't come up with anything better...


>  
>  bool intel_dp_compute_config_limits(struct intel_dp *intel_dp,
>  				    struct drm_connector_state *conn_state,
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 4c0b943fe86f1..1a4784f0cd6bd 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -344,8 +344,8 @@ int intel_dp_mtp_tu_compute_config(struct intel_dp *intel_dp,
>  		}
>  
>  		link_bpp_x16 = dsc ? bpp_x16 :
> -			fxp_q4_from_int(intel_dp_output_bpp(crtc_state->output_format,
> -							    fxp_q4_to_int(bpp_x16)));
> +			intel_dp_output_format_link_bpp_x16(crtc_state->output_format,
> +							    fxp_q4_to_int(bpp_x16));
>  
>  		local_bw_overhead = intel_dp_mst_bw_overhead(crtc_state,
>  							     false, dsc_slice_count, link_bpp_x16);
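As a reader's note on the x16 convention used throughout this patch: an x16 (q4) value carries 4 fractional bits, i.e. the integer BPP times 16. A minimal sketch of the semantics of fxp_q4_from_int()/fxp_q4_to_int_roundup() and of the converted function — the plain-C helpers below are models, not the kernel's fxp implementation:

```c
#include <assert.h>

/* x16 / q4 fixed point: 4 fractional bits, i.e. value * 16. */
static int q4_from_int(int v)
{
	return v << 4;
}

static int q4_to_int_roundup(int v_x16)
{
	return (v_x16 + 15) >> 4;
}

/*
 * Sketch of intel_dp_output_format_link_bpp_x16() from the patch:
 * YCbCr 4:2:0 carries half the bytes per RGB pixel, so the link BPP
 * is half the pipe BPP; the result is returned in x16 format.
 */
static int output_format_link_bpp_x16(int is_ycbcr420, int pipe_bpp)
{
	if (is_ycbcr420)
		pipe_bpp /= 2;

	return q4_from_int(pipe_bpp);
}
```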

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check
  2025-12-09  8:51   ` Luca Coelho
@ 2025-12-09  9:53     ` Imre Deak
  2025-12-09 11:14       ` Luca Coelho
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-09  9:53 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe, dri-devel

On Tue, Dec 09, 2025 at 10:51:10AM +0200, Luca Coelho wrote:
> On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > A DSC sink supporting DSC slice count N does not necessarily support
> > slice counts less than N. Hence the driver should check the sink's
> > support for a particular slice count before using it; fix
> > intel_dp_dsc_get_slice_count() accordingly.
> > 
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 0ec82fcbcf48e..6d232c15a0b5a 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -1013,6 +1013,8 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> >  				int num_joined_pipes)
> >  {
> >  	struct intel_display *display = to_intel_display(connector);
> > +	u32 sink_slice_count_mask =
> > +		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
> >  	u8 min_slice_count, i;
> >  	int max_slice_width;
> >  	int tp_rgb_yuv444;
> > @@ -1084,9 +1086,9 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> >  		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
> >  			continue;
> >  
> > -		if (test_slice_count >
> > -		    drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, false))
> > -			break;
> > +		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
> > +		      sink_slice_count_mask))
> > +			continue;
> >  
> >  		 /*
> >  		  * Bigjoiner needs small joiner to be enabled.
> > @@ -1103,8 +1105,14 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> >  			return test_slice_count;
> >  	}
> >  
> > -	drm_dbg_kms(display->drm, "Unsupported Slice Count %d\n",
> > -		    min_slice_count);
> > +	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
> > +	sink_slice_count_mask <<= 1;
> > +	drm_dbg_kms(display->drm,
> > +		    "[CONNECTOR:%d:%s] Unsupported slice count (min: %d, sink supported: %*pbl)\n",
> > +		    connector->base.base.id, connector->base.name,
> > +		    min_slice_count,
> > +		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
> > +
> >  	return 0;
> >  }
> > 
> 
> I think this patch could be squashed into the previous one.  IMHO it
> makes it a bit easier to see how those functions defined in the
> previous patch would be used.

The practice I follow is to keep the DRM core and driver changes in
separate patches. At least one reason for that is that the DRM core
patches may need to be applied to the DRM core trees separately.

> But nevertheless:
> 
> Reviewed-by: Luca Coelho <luciano.coelho@intel.com>
> 
> --
> Cheers,
> Luca.


* Re: [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc()
  2025-12-09  8:47   ` Hogander, Jouni
@ 2025-12-09 10:38     ` Imre Deak
  2025-12-09 11:37       ` Hogander, Jouni
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-09 10:38 UTC (permalink / raw)
  To: Jouni Hogander
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Tue, Dec 09, 2025 at 10:47:35AM +0200, Jouni Hogander wrote:
> On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> > Move the initialization of the DSI DSC streams-per-pipe value to
> > fill_dsc() next to where the corresponding (per-line) slice_count value
> > is initialized. This allows converting the initialization to use the
> > detailed slice configuration state in follow-up changes.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/icl_dsi.c    | 6 ------
> >  drivers/gpu/drm/i915/display/intel_bios.c | 5 +++++
> >  2 files changed, 5 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
> > index 90076839e7152..9aba3d813daae 100644
> > --- a/drivers/gpu/drm/i915/display/icl_dsi.c
> > +++ b/drivers/gpu/drm/i915/display/icl_dsi.c
> > @@ -1624,12 +1624,6 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
> >  	if (crtc_state->pipe_bpp < 8 * 3)
> >  		return -EINVAL;
> >  
> > -	/* FIXME: split only when necessary */
> > -	if (crtc_state->dsc.slice_count > 1)
> > -		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> > -	else
> > -		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> > -
> >  	/* FIXME: initialize from VBT */
> >  	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
> >  
> > diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> > index a639c5eb32459..e69fac4f5bdfe 100644
> > --- a/drivers/gpu/drm/i915/display/intel_bios.c
> > +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> > @@ -3516,10 +3516,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
> >  	 * throughput etc. into account.
> >  	 *
> >  	 * Also, per spec DSI supports 1, 2, 3 or 4 horizontal slices.
> > +	 *
> > +	 * FIXME: split only when necessary
> >  	 */
> >  	if (dsc->slices_per_line & BIT(2)) {
> > +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> >  		crtc_state->dsc.slice_count = 4;
> >  	} else if (dsc->slices_per_line & BIT(1)) {
> > +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> 
> fill_dsc is called by intel_bios_get_dsc_params. Is streams_per_pipe
> really a BIOS parameter? I see slices_per_line is in VBT.
> Streams_per_pipe and existing slice_count are decided based on that.

The slices_per_line computed in fill_dsc() at the moment
(crtc_state->dsc.slice_count) is not exactly what is in VBT. VBT
indicates what slices_per_line counts the sink supports, not what the
selected slices_per_line count should be (which would be a single
integer parameter in VBT, not a mask).

> Is that the right place to make that decision or should we leave that
> decision to the caller of intel_bios_get_dsc_params?

I think the computation of the slices_per_line value (for which the
sink's slices_per_line capability mask is only one criterion) should be at
the same spot where the closely related pipes_per_line, streams_per_pipe
and slices_per_stream are computed as well. In fact at the end of the
patchset only these latter 3 params are computed and the slices_per_line
value is derived from these using a helper function.
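The derivation described here — slices_per_line as the product of the three detailed parameters, which is what a helper like intel_dsc_line_slice_count() can return — can be sketched as follows; the struct layout below is an illustration, not the exact i915 definition:

```c
#include <assert.h>

struct dsc_slice_config {
	int pipes_per_line;	/* joined pipes driving one scanline */
	int streams_per_pipe;	/* DSC engines (streams) per pipe */
	int slices_per_stream;	/* horizontal slices per DSC stream */
};

/* slices_per_line = pipes_per_line * streams_per_pipe * slices_per_stream */
static int line_slice_count(const struct dsc_slice_config *cfg)
{
	return cfg->pipes_per_line * cfg->streams_per_pipe *
		cfg->slices_per_stream;
}
```

With the values patch 39 programs for the BIT(2) case (1 pipe per line, 2 streams per pipe, 2 slices per stream) this yields the previous slice_count of 4.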

I agree with you that fill_dsc() should not do the actual state
computation (like it does atm selecting slices_per_line aka
dsc.slice_count), rather this should be done by the DSI encoder state
computation in gen11_dsi_dsc_compute_config(), fill_dsc() only returning
a mask of the slices_per_line counts supported by the sink. Would you be
ok to do this as a follow-up?

> BR,
> 
> Jouni Högander
> 
> >  		crtc_state->dsc.slice_count = 2;
> >  	} else {
> >  		/* FIXME */
> > @@ -3527,6 +3531,7 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
> >  			drm_dbg_kms(display->drm,
> >  				    "VBT: Unsupported DSC slice count for DSI\n");
> >  
> > +		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> >  		crtc_state->dsc.slice_count = 1;
> >  	}
> >  


* Re: [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check
  2025-12-09  9:53     ` Imre Deak
@ 2025-12-09 11:14       ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09 11:14 UTC (permalink / raw)
  To: imre.deak; +Cc: intel-gfx, intel-xe, dri-devel

On Tue, 2025-12-09 at 11:53 +0200, Imre Deak wrote:
> On Tue, Dec 09, 2025 at 10:51:10AM +0200, Luca Coelho wrote:
> > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > A DSC sink supporting DSC slice count N does not necessarily support
> > > slice counts less than N. Hence the driver should check the sink's
> > > support for a particular slice count before using it; fix
> > > intel_dp_dsc_get_slice_count() accordingly.
> > > 
> > > Cc: dri-devel@lists.freedesktop.org
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++++++++-----
> > >  1 file changed, 13 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index 0ec82fcbcf48e..6d232c15a0b5a 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -1013,6 +1013,8 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> > >  				int num_joined_pipes)
> > >  {
> > >  	struct intel_display *display = to_intel_display(connector);
> > > +	u32 sink_slice_count_mask =
> > > +		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
> > >  	u8 min_slice_count, i;
> > >  	int max_slice_width;
> > >  	int tp_rgb_yuv444;
> > > @@ -1084,9 +1086,9 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> > >  		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
> > >  			continue;
> > >  
> > > -		if (test_slice_count >
> > > -		    drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, false))
> > > -			break;
> > > +		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
> > > +		      sink_slice_count_mask))
> > > +			continue;
> > >  
> > >  		 /*
> > >  		  * Bigjoiner needs small joiner to be enabled.
> > > @@ -1103,8 +1105,14 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> > >  			return test_slice_count;
> > >  	}
> > >  
> > > -	drm_dbg_kms(display->drm, "Unsupported Slice Count %d\n",
> > > -		    min_slice_count);
> > > +	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
> > > +	sink_slice_count_mask <<= 1;
> > > +	drm_dbg_kms(display->drm,
> > > +		    "[CONNECTOR:%d:%s] Unsupported slice count (min: %d, sink supported: %*pbl)\n",
> > > +		    connector->base.base.id, connector->base.name,
> > > +		    min_slice_count,
> > > +		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
> > > +
> > >  	return 0;
> > >  }
> > > 
> > 
> > I think this patch could be squashed into the previous one.  IMHO it
> > makes it a bit easier to see how those functions defined in the
> > previous patch would be used.
> 
> The practice I follow is to keep the DRM core and driver changes in
> separate patches. At least one reason for that is that the DRM core
> patches may need to be applied to the DRM core trees separately.

Oh, of course, makes 100% sense.  I overlooked this point.

--
Cheers,
Luca.


* Re: [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc()
  2025-12-09 10:38     ` Imre Deak
@ 2025-12-09 11:37       ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 11:37 UTC (permalink / raw)
  To: Deak, Imre
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Tue, 2025-12-09 at 12:38 +0200, Imre Deak wrote:
> On Tue, Dec 09, 2025 at 10:47:35AM +0200, Jouni Hogander wrote:
> > On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> > > Move the initialization of the DSI DSC streams-per-pipe value to
> > > fill_dsc() next to where the corresponding (per-line) slice_count
> > > value is initialized. This allows converting the initialization to
> > > use the detailed slice configuration state in follow-up changes.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/icl_dsi.c    | 6 ------
> > >  drivers/gpu/drm/i915/display/intel_bios.c | 5 +++++
> > >  2 files changed, 5 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
> > > index 90076839e7152..9aba3d813daae 100644
> > > --- a/drivers/gpu/drm/i915/display/icl_dsi.c
> > > +++ b/drivers/gpu/drm/i915/display/icl_dsi.c
> > > @@ -1624,12 +1624,6 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
> > >  	if (crtc_state->pipe_bpp < 8 * 3)
> > >  		return -EINVAL;
> > >  
> > > -	/* FIXME: split only when necessary */
> > > -	if (crtc_state->dsc.slice_count > 1)
> > > -		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> > > -	else
> > > -		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> > > -
> > >  	/* FIXME: initialize from VBT */
> > >  	vdsc_cfg->rc_model_size = DSC_RC_MODEL_SIZE_CONST;
> > >  
> > > diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> > > index a639c5eb32459..e69fac4f5bdfe 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_bios.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> > > @@ -3516,10 +3516,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
> > >  	 * throughput etc. into account.
> > >  	 *
> > >  	 * Also, per spec DSI supports 1, 2, 3 or 4 horizontal slices.
> > > +	 *
> > > +	 * FIXME: split only when necessary
> > >  	 */
> > >  	if (dsc->slices_per_line & BIT(2)) {
> > > +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> > >  		crtc_state->dsc.slice_count = 4;
> > >  	} else if (dsc->slices_per_line & BIT(1)) {
> > > +		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> > 
> > fill_dsc is called by intel_bios_get_dsc_params. Is streams_per_pipe
> > really a BIOS parameter? I see slices_per_line is in VBT.
> > Streams_per_pipe and existing slice_count are decided based on that.
> 
> The slices_per_line computed in fill_dsc() at the moment
> (crtc_state->dsc.slice_count) is not exactly what is in VBT. VBT
> indicates what slices_per_line counts the sink supports, not what the
> selected slices_per_line count should be (which would be a single
> integer parameter in VBT, not a mask).
> 
> > Is that the right place to make that decision or should we leave that
> > decision to the caller of intel_bios_get_dsc_params?
> 
> I think the computation of the slices_per_line value (for which the
> sink's slices_per_line capability mask is only one criterion) should
> be at the same spot where the closely related pipes_per_line,
> streams_per_pipe and slices_per_stream are computed as well. In fact
> at the end of the patchset only these latter 3 params are computed and
> the slices_per_line value is derived from these using a helper
> function.
> 
> I agree with you that fill_dsc() should not do the actual state
> computation (like it does atm selecting slices_per_line aka
> dsc.slice_count), rather this should be done by the DSI encoder state
> computation in gen11_dsi_dsc_compute_config(), fill_dsc() only
> returning a mask of the slices_per_line counts supported by the sink.
> Would you be ok to do this as a follow-up?

Thank you for the explanation. I'm fine with doing it as a follow-up:

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> 
> > BR,
> > 
> > Jouni Högander
> > 
> > >  		crtc_state->dsc.slice_count = 2;
> > >  	} else {
> > >  		/* FIXME */
> > > @@ -3527,6 +3531,7 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
> > >  			drm_dbg_kms(display->drm,
> > >  				    "VBT: Unsupported DSC slice count for DSI\n");
> > >  
> > > +		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> > >  		crtc_state->dsc.slice_count = 1;
> > >  	}
> > >  



* Re: [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration
  2025-11-27 17:50 ` [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration Imre Deak
@ 2025-12-09 12:43   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 12:43 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add tracking for the DSI DSC pipes-per-line and slices-per-stream
> value
> in the slice config state and compute the current slices-per-line
> value
> using this slice config state. The slices-per-line value used atm
> will
> be removed by a follow-up change after converting all the places
> using
> it to use the detailed slice config instead.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_bios.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> index e69fac4f5bdfe..479c5f0158800 100644
> --- a/drivers/gpu/drm/i915/display/intel_bios.c
> +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> @@ -41,6 +41,7 @@
>  #include "intel_display_utils.h"
>  #include "intel_gmbus.h"
>  #include "intel_rom.h"
> +#include "intel_vdsc.h"
>  
>  #define _INTEL_BIOS_PRIVATE
>  #include "intel_vbt_defs.h"
> @@ -3519,12 +3520,14 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  	 *
>  	 * FIXME: split only when necessary
>  	 */
> +	crtc_state->dsc.slice_config.pipes_per_line = 1;
> +
>  	if (dsc->slices_per_line & BIT(2)) {
>  		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> -		crtc_state->dsc.slice_count = 4;
> +		crtc_state->dsc.slice_config.slices_per_stream = 2;
>  	} else if (dsc->slices_per_line & BIT(1)) {
>  		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> -		crtc_state->dsc.slice_count = 2;
> +		crtc_state->dsc.slice_config.slices_per_stream = 1;
>  	} else {
>  		/* FIXME */
>  		if (!(dsc->slices_per_line & BIT(0)))
> @@ -3532,9 +3535,11 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  			drm_dbg_kms(display->drm,
>  				    "VBT: Unsupported DSC slice count for DSI\n");
>  
>  		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> -		crtc_state->dsc.slice_count = 1;
> +		crtc_state->dsc.slice_config.slices_per_stream = 1;
>  	}
>  
> +	crtc_state->dsc.slice_count = intel_dsc_line_slice_count(&crtc_state->dsc.slice_config);
> +
>  	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
>  	    crtc_state->dsc.slice_count != 0)
>  		drm_dbg_kms(display->drm,


^ permalink raw reply	[flat|nested] 137+ messages in thread
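
The fill_dsc() hunk above maps the VBT slices-per-line capability bits onto the new streams_per_pipe/slices_per_stream pair. A standalone sketch of that mapping (names mirror the patch, but the types and helpers here are simplified stand-ins, not the driver code):

```c
#include <assert.h>

#define BIT(n) (1u << (n))

struct slice_config {
	int streams_per_pipe;
	int slices_per_stream;
};

/* The VBT reports slices-per-line as a bitmask:
 * BIT(2) = 4 slices, BIT(1) = 2 slices, BIT(0) = 1 slice. */
static void fill_slice_config(unsigned int vbt_slices_per_line,
			      struct slice_config *cfg)
{
	if (vbt_slices_per_line & BIT(2)) {
		cfg->streams_per_pipe = 2;	/* 2 streams x 2 slices = 4 */
		cfg->slices_per_stream = 2;
	} else if (vbt_slices_per_line & BIT(1)) {
		cfg->streams_per_pipe = 2;	/* 2 streams x 1 slice = 2 */
		cfg->slices_per_stream = 1;
	} else {
		cfg->streams_per_pipe = 1;	/* single stream, single slice */
		cfg->slices_per_stream = 1;
	}
}
```

With pipes_per_line fixed to 1 for DSI, the slices-per-line value is just the product of the two fields, reproducing the values the removed slice_count assignments carried.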

* Re: [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation
  2025-11-27 17:49 ` [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation Imre Deak
@ 2025-12-09 12:51   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09 12:51 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> The encoder state computation should use the
> drm_display_mode::crtc_clock member, instead of the clock member, the
> former one possibly having a necessary adjustment wrt. to the latter
> due to driver specific constraints. In practice the two values should
> not differ at spots changed in this patch, since only MSO and 3D modes
> would make them different, neither MSO or 3D relevant here, but still
> use the expected crtc_clock version for consistency.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index beda340d05923..d70cb35cf68bc 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2050,7 +2050,8 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  					continue;
>  			} else {
>  				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
> -								     lane_count, adjusted_mode->clock,
> +								     lane_count,
> +								     adjusted_mode->crtc_clock,
>  								     pipe_config->output_format,
>  								     timeslots))
>  					continue;
> @@ -2211,7 +2212,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  	int bpp_x16;
>  	int ret;
>  
> -	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->clock,
> +	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->crtc_clock,
>  								adjusted_mode->hdisplay,
>  								num_joined_pipes);
>  	max_bpp_x16 = min(fxp_q4_from_int(dsc_joiner_max_bpp), limits->link.max_bpp_x16);

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead()
  2025-11-27 17:49 ` [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead() Imre Deak
@ 2025-12-09 12:52   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09 12:52 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out intel_dp_link_bw_overhead(), used later for BW calculation
> during DP SST mode validation and state computation.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 26 +++++++++++++++++++++
>  drivers/gpu/drm/i915/display/intel_dp.h     |  2 ++
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 22 +++++------------
>  3 files changed, 34 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index d70cb35cf68bc..4722ee26b1181 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -424,6 +424,32 @@ static int intel_dp_min_lane_count(struct intel_dp *intel_dp)
>  	return 1;
>  }
>  
> +int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
> +			      int dsc_slice_count, int bpp_x16, unsigned long flags)
> +{
> +	int overhead;
> +
> +	WARN_ON(flags & ~(DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK |
> +			  DRM_DP_BW_OVERHEAD_FEC));
> +
> +	if (drm_dp_is_uhbr_rate(link_clock))
> +		flags |= DRM_DP_BW_OVERHEAD_UHBR;
> +
> +	if (dsc_slice_count)
> +		flags |= DRM_DP_BW_OVERHEAD_DSC;
> +
> +	overhead = drm_dp_bw_overhead(lane_count, hdisplay,
> +				      dsc_slice_count,
> +				      bpp_x16,
> +				      flags);
> +
> +	/*
> +	 * TODO: clarify whether a minimum required by the fixed FEC overhead
> +	 * in the bspec audio programming sequence is required here.
> +	 */
> +	return max(overhead, intel_dp_bw_fec_overhead(flags & DRM_DP_BW_OVERHEAD_FEC));
> +}
> +
>  /*
>   * The required data bandwidth for a mode with given pixel clock and bpp. This
>   * is the required net bandwidth independent of the data bandwidth efficiency.
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 97e361458f760..d7f9410129f49 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -117,6 +117,8 @@ void intel_dp_compute_rate(struct intel_dp *intel_dp, int port_clock,
>  bool intel_dp_source_supports_tps3(struct intel_display *display);
>  bool intel_dp_source_supports_tps4(struct intel_display *display);
>  
> +int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
> +			      int dsc_slice_count, int bpp_x16, unsigned long flags);
>  int intel_dp_link_required(int pixel_clock, int bpp);
>  int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
>  				 int bw_overhead);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 1a4784f0cd6bd..c1058b4a85d02 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -180,26 +180,16 @@ static int intel_dp_mst_bw_overhead(const struct intel_crtc_state *crtc_state,
>  	const struct drm_display_mode *adjusted_mode =
>  		&crtc_state->hw.adjusted_mode;
>  	unsigned long flags = DRM_DP_BW_OVERHEAD_MST;
> -	int overhead;
>  
> -	flags |= intel_dp_is_uhbr(crtc_state) ? DRM_DP_BW_OVERHEAD_UHBR : 0;
>  	flags |= ssc ? DRM_DP_BW_OVERHEAD_SSC_REF_CLK : 0;
>  	flags |= crtc_state->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
>  
> -	if (dsc_slice_count)
> -		flags |= DRM_DP_BW_OVERHEAD_DSC;
> -
> -	overhead = drm_dp_bw_overhead(crtc_state->lane_count,
> -				      adjusted_mode->hdisplay,
> -				      dsc_slice_count,
> -				      bpp_x16,
> -				      flags);
> -
> -	/*
> -	 * TODO: clarify whether a minimum required by the fixed FEC overhead
> -	 * in the bspec audio programming sequence is required here.
> -	 */
> -	return max(overhead, intel_dp_bw_fec_overhead(crtc_state->fec_enable));
> +	return intel_dp_link_bw_overhead(crtc_state->port_clock,
> +					 crtc_state->lane_count,
> +					 adjusted_mode->hdisplay,
> +					 dsc_slice_count,
> +					 bpp_x16,
> +					 flags);
>  }
>  
>  static void intel_dp_mst_compute_m_n(const struct intel_crtc_state *crtc_state,

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread
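
The point of the factored-out helper above is that callers pass only the MST/SSC/FEC flags, while the UHBR and DSC flags are derived inside from the link rate and slice count. A minimal sketch of that flag derivation (the flag values and the 1000000 kHz UHBR threshold are illustrative stand-ins for the DRM_DP_BW_OVERHEAD_* flags and drm_dp_is_uhbr_rate()):

```c
#include <assert.h>

#define BW_OVERHEAD_MST  (1u << 0)
#define BW_OVERHEAD_SSC  (1u << 1)
#define BW_OVERHEAD_FEC  (1u << 2)
#define BW_OVERHEAD_UHBR (1u << 3)
#define BW_OVERHEAD_DSC  (1u << 4)

/* UHBR link rates start at 10 Gbps per lane, i.e. 1000000 in kHz units. */
static int is_uhbr_rate(int link_clock_khz)
{
	return link_clock_khz >= 1000000;
}

/* Callers supply only MST/SSC/FEC; UHBR and DSC are derived here. */
static unsigned long link_bw_overhead_flags(int link_clock, int dsc_slice_count,
					    unsigned long flags)
{
	if (is_uhbr_rate(link_clock))
		flags |= BW_OVERHEAD_UHBR;

	if (dsc_slice_count)
		flags |= BW_OVERHEAD_DSC;

	return flags;
}
```

This is why the MST caller in the patch can drop its own UHBR/DSC flag handling and forward only the link parameters.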

* Re: [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config()
  2025-11-27 17:49 ` [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config() Imre Deak
@ 2025-12-09 12:53   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-09 12:53 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> is_bw_sufficient_for_dsc_config() should return true if the required BW
> equals the available BW, make it so.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 4722ee26b1181..4556a57db7c02 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2029,7 +2029,7 @@ static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
>  	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
>  	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
>  
> -	return available_bw > required_bw;
> +	return available_bw >= required_bw;
>  }
>  
>  static int dsc_compute_link_config(struct intel_dp *intel_dp,

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread
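
The one-character fix above changes a strict comparison into an inclusive one. A minimal model of the check, with simplified units (the real function also converts the mode clock to a FEC-adjusted clock via intel_dp_mode_to_fec_clock()):

```c
#include <assert.h>

/* Mirrors the quoted expression: (link_clock * lane_count * timeslots * 16) / 8 */
static long long available_dsc_bw(long long link_clock, int lane_count,
				  int timeslots)
{
	return link_clock * lane_count * timeslots * 16 / 8;
}

/* A config must be accepted when required == available, hence >= rather than >. */
static int is_bw_sufficient(long long available_bw, long long required_bw)
{
	return available_bw >= required_bw;
}
```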

* Re: [PATCH 40/50] drm/i915/dp: Track the detailed DSC slice configuration
  2025-11-27 17:50 ` [PATCH 40/50] drm/i915/dp: " Imre Deak
@ 2025-12-09 14:06   ` Hogander, Jouni
  2025-12-09 14:30     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 14:06 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add tracking for the DP DSC pipes-per-line and slices-per-stream value
> in the slice config state and compute the current slices-per-line
> (slice_count) value using this slice config. The slices-per-line value
> used atm will be removed by a follow-up change after converting all the
> places using it to use the slice config instead.
> 
> For now the slices-per-stream value is calculated based on the
> slices-per-line value (slice_count) calculated by the
> drm_dp_dsc_sink_max_slice_count() / intel_dp_dsc_get_slice_count()
> functions. In a follow-up change these functions will be converted to
> calculate the slices-per-stream value directly, along with the detailed
> slice configuration.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 32 +++++++++++++++++++++-----------
>  1 file changed, 21 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 03266511841e2..d17afc18fcfa7 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2356,6 +2356,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  		&pipe_config->hw.adjusted_mode;
>  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
>  	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
> +	int slices_per_line;

Why you are not using obvious name for this: slice_count ?

>  	int ret;
>  
>  	/*
> @@ -2383,30 +2384,26 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  
>  	/* Calculate Slice count */
>  	if (intel_dp_is_edp(intel_dp)) {
> -		pipe_config->dsc.slice_count =
> +		slices_per_line =
>  			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
>  							true);
> -		if (!pipe_config->dsc.slice_count) {
> +		if (!slices_per_line) {
>  			drm_dbg_kms(display->drm,
>  				    "Unsupported Slice Count %d\n",
> -				    pipe_config->dsc.slice_count);
> +				    slices_per_line);
>  			return -EINVAL;
>  		}
>  	} else {
> -		u8 dsc_dp_slice_count;
> -
> -		dsc_dp_slice_count =
> +		slices_per_line =
>  			intel_dp_dsc_get_slice_count(connector,
>  						     adjusted_mode->crtc_clock,
>  						     adjusted_mode->crtc_hdisplay,
>  						     num_joined_pipes);
> -		if (!dsc_dp_slice_count) {
> +		if (!slices_per_line) {
>  			drm_dbg_kms(display->drm,
>  				    "Compressed Slice Count not supported\n");
>  			return -EINVAL;
>  		}

You could share handling of !slices_per_line for DP and eDP.

BR,

Jouni Högander

> -
> -		pipe_config->dsc.slice_count = dsc_dp_slice_count;
>  	}
>  	/*
>  	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
> @@ -2415,14 +2412,27 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  	 * In case of Ultrajoiner along with 12 slices we need to use 3
>  	 * VDSC instances.
>  	 */
> +	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
> +
>  	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
> -	    pipe_config->dsc.slice_count == 12)
> +	    slices_per_line == 12)
>  		pipe_config->dsc.slice_config.streams_per_pipe = 3;
> -	else if (pipe_config->joiner_pipes || pipe_config->dsc.slice_count > 1)
> +	else if (pipe_config->joiner_pipes || slices_per_line > 1)
>  		pipe_config->dsc.slice_config.streams_per_pipe = 2;
>  	else
>  		pipe_config->dsc.slice_config.streams_per_pipe = 1;
>  
> +	pipe_config->dsc.slice_config.slices_per_stream =
> +		slices_per_line /
> +		pipe_config->dsc.slice_config.pipes_per_line /
> +		pipe_config->dsc.slice_config.streams_per_pipe;
> +
> +	pipe_config->dsc.slice_count =
> +		intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
> +
> +	drm_WARN_ON(display->drm,
> +		    pipe_config->dsc.slice_count != slices_per_line);
> +
>  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
>  	if (ret < 0) {
>  		drm_dbg_kms(display->drm,


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 40/50] drm/i915/dp: Track the detailed DSC slice configuration
  2025-12-09 14:06   ` Hogander, Jouni
@ 2025-12-09 14:30     ` Imre Deak
  2025-12-09 17:50       ` Hogander, Jouni
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-09 14:30 UTC (permalink / raw)
  To: Jouni Hogander
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Tue, Dec 09, 2025 at 04:06:53PM +0200, Jouni Hogander wrote:
> On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:

> > Add tracking for the DP DSC pipes-per-line and slices-per-stream
> > value in the slice config state and compute the current
> > slices-per-line (slice_count) value using this slice config. The
> > slices-per-line value used atm will be removed by a follow-up change
> > after converting all the places using it to use the slice config
> > instead.
> > 
> > For now the slices-per-stream value is calculated based on the
> > slices-per-line value (slice_count) calculated by the
> > drm_dp_dsc_sink_max_slice_count() / intel_dp_dsc_get_slice_count()
> > functions. In a follow-up change these functions will be converted
> > to calculate the slices-per-stream value directly, along with the
> > detailed slice configuration.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 32 +++++++++++++++++++++-----------
> >  1 file changed, 21 insertions(+), 11 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> > b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 03266511841e2..d17afc18fcfa7 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -2356,6 +2356,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> >  		&pipe_config->hw.adjusted_mode;
> >  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> >  	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
> > +	int slices_per_line;
> 
> Why you are not using obvious name for this: slice_count ?

slice_count is not obvious imo. It could mean the number of slices per
line/pipe/stream. It's the first one reported by the sink.

> 
> >  	int ret;
> >  
> >  	/*
> > @@ -2383,30 +2384,26 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> >  
> >  	/* Calculate Slice count */
> >  	if (intel_dp_is_edp(intel_dp)) {
> > -		pipe_config->dsc.slice_count =
> > +		slices_per_line =
> >  			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
> >  							true);
> > -		if (!pipe_config->dsc.slice_count) {
> > +		if (!slices_per_line) {
> >  			drm_dbg_kms(display->drm,
> >  				    "Unsupported Slice Count %d\n",
> > -				    pipe_config->dsc.slice_count);
> > +				    slices_per_line);
> >  			return -EINVAL;
> >  		}
> >  	} else {
> > -		u8 dsc_dp_slice_count;
> > -
> > -		dsc_dp_slice_count =
> > +		slices_per_line =
> >  			intel_dp_dsc_get_slice_count(connector,
> >  						     adjusted_mode->crtc_clock,
> >  						     adjusted_mode->crtc_hdisplay,
> >						     num_joined_pipes);
> > -		if (!dsc_dp_slice_count) {
> > +		if (!slices_per_line) {
> >  			drm_dbg_kms(display->drm,
> >  				    "Compressed Slice Count not supported\n");
> >  			return -EINVAL;
> >  		}
> 
> You could share handling of !slices_per_line for DP and eDP.

I do that later in
"Unify DP and eDP slice count computation"

leaving the changes in this patch simple.

> 
> BR,
> 
> Jouni Högander

^ permalink raw reply	[flat|nested] 137+ messages in thread
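
The derivation discussed in this thread can be sketched standalone: given a total slices-per-line value and the already-chosen pipe/stream split, the per-stream slice count follows by division, and multiplying the three factors back must reproduce the original value (the drm_WARN_ON in the patch). Names mirror the patch; this is an illustration, not the driver code:

```c
#include <assert.h>

struct slice_config {
	int pipes_per_line;
	int streams_per_pipe;
	int slices_per_stream;
};

/* Total DSC slices in a line: joined pipes x stream engines per pipe
 * x slices per stream engine. */
static int line_slice_count(const struct slice_config *cfg)
{
	return cfg->pipes_per_line * cfg->streams_per_pipe *
	       cfg->slices_per_stream;
}

/* Mirrors the patch: derive slices_per_stream from the total. */
static void derive_slices_per_stream(struct slice_config *cfg,
				     int slices_per_line)
{
	cfg->slices_per_stream =
		slices_per_line / cfg->pipes_per_line / cfg->streams_per_pipe;
}
```

For the ultrajoiner case the patch special-cases (4 pipes, 3 streams per pipe, 12 slices per line), the derived per-stream slice count is 1 and the product round-trips.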

* Re: [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count()
  2025-11-27 17:50 ` [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count() Imre Deak
@ 2025-12-09 17:14   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 17:14 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> By now all the places are updated to track the DSC slice configuration
> in intel_crtc_state::dsc.slice_config, so calculate the slices-per-line
> value using that config, instead of using
> intel_crtc_state::dsc.slice_count caching the same value and remove
> the cached slice_count.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_bios.c          |  6 ++----
>  drivers/gpu/drm/i915/display/intel_display_types.h |  1 -
>  drivers/gpu/drm/i915/display/intel_dp.c            | 11 +++++------
>  drivers/gpu/drm/i915/display/intel_vdsc.c          |  7 ++++---
>  4 files changed, 11 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> index 479c5f0158800..698e569a48e61 100644
> --- a/drivers/gpu/drm/i915/display/intel_bios.c
> +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> @@ -3538,14 +3538,12 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  		crtc_state->dsc.slice_config.slices_per_stream = 1;
>  	}
>  
> -	crtc_state->dsc.slice_count = intel_dsc_line_slice_count(&crtc_state->dsc.slice_config);
> -
>  	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
> -	    crtc_state->dsc.slice_count != 0)
> +	    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config) != 0)
>  		drm_dbg_kms(display->drm,
>  			    "VBT: DSC hdisplay %d not divisible by slice count %d\n",
>  			    crtc_state->hw.adjusted_mode.crtc_hdisplay,
> -			    crtc_state->dsc.slice_count);
> +			    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config));
>  
>  	/*
>  	 * The VBT rc_buffer_block_size and rc_buffer_size definitions
> diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
> index 574fc7ff33c97..0f56be61f081b 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_types.h
> +++ b/drivers/gpu/drm/i915/display/intel_display_types.h
> @@ -1312,7 +1312,6 @@ struct intel_crtc_state {
>  		} slice_config;
>  		/* Compressed Bpp in U6.4 format (first 4 bits for fractional part) */
>  		u16 compressed_bpp_x16;
> -		u8 slice_count;
>  		struct drm_dsc_config config;
>  	} dsc;
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index d17afc18fcfa7..126048c5233c4 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2031,12 +2031,14 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  			} else {
>  				unsigned long bw_overhead_flags =
>  					pipe_config->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
> +				int line_slice_count =
> +					intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
>  
>  				if (!is_bw_sufficient_for_dsc_config(intel_dp,
>  								     link_rate, lane_count,
>  								     adjusted_mode->crtc_clock,
>  								     adjusted_mode->hdisplay,
> -								     pipe_config->dsc.slice_count,
> +								     line_slice_count,
>  								     dsc_bpp_x16,
>  								     bw_overhead_flags))
>  					continue;
> @@ -2427,11 +2429,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  		pipe_config->dsc.slice_config.pipes_per_line /
>  		pipe_config->dsc.slice_config.streams_per_pipe;
>  
> -	pipe_config->dsc.slice_count =
> -		intel_dsc_line_slice_count(&pipe_config->dsc.slice_config);
> -
>  	drm_WARN_ON(display->drm,
> -		    pipe_config->dsc.slice_count != slices_per_line);
> +		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
>  
>  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
>  	if (ret < 0) {
> @@ -2449,7 +2448,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  		    "Compressed Bpp = " FXP_Q4_FMT " Slice Count = %d\n",
>  		    pipe_config->pipe_bpp,
>  		    FXP_Q4_ARGS(pipe_config->dsc.compressed_bpp_x16),
> -		    pipe_config->dsc.slice_count);
> +		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config));
>  
>  	return 0;
>  }
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c b/drivers/gpu/drm/i915/display/intel_vdsc.c
> index 2b27671f97b32..190ce567bc7fa 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.c
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
> @@ -283,8 +283,9 @@ int intel_dsc_compute_params(struct intel_crtc_state *pipe_config)
>  	int ret;
>  
>  	vdsc_cfg->pic_width = pipe_config->hw.adjusted_mode.crtc_hdisplay;
> -	vdsc_cfg->slice_width = DIV_ROUND_UP(vdsc_cfg->pic_width,
> -					     pipe_config->dsc.slice_count);
> +	vdsc_cfg->slice_width =
> +		DIV_ROUND_UP(vdsc_cfg->pic_width,
> +			     intel_dsc_line_slice_count(&pipe_config->dsc.slice_config));
>  
>  	err = intel_dsc_slice_dimensions_valid(pipe_config, vdsc_cfg);
>  
> @@ -1042,7 +1043,7 @@ static void intel_vdsc_dump_state(struct drm_printer *p, int indent,
>  	drm_printf_indent(p, indent,
>  			  "dsc-dss: compressed-bpp:" FXP_Q4_FMT ", slice-count: %d, num_streams: %d\n",
>  			  FXP_Q4_ARGS(crtc_state->dsc.compressed_bpp_x16),
> -			  crtc_state->dsc.slice_count,
> +			  intel_dsc_line_slice_count(&crtc_state->dsc.slice_config),
>  			  crtc_state->dsc.slice_config.streams_per_pipe);
>  }
>  


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count()
  2025-11-27 17:50 ` [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count() Imre Deak
@ 2025-12-09 17:26   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 17:26 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Factor out intel_dp_dsc_min_slice_count() for making
> intel_dp_dsc_get_slice_count() more readable and also to prepare for a
> follow-up change unifying the eDP and DP slice count/config computation.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 126048c5233c4..79b87bc041a75 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -958,14 +958,11 @@ u32 get_max_compressed_bpp_with_joiner(struct intel_display *display,
>  	return max_bpp;
>  }
>  
> -u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> -				int mode_clock, int mode_hdisplay,
> -				int num_joined_pipes)
> +static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
> +					int mode_clock, int mode_hdisplay)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> -	u32 sink_slice_count_mask =
> -		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
> -	u8 min_slice_count, i;
> +	u8 min_slice_count;
>  	int max_slice_width;
>  	int tp_rgb_yuv444;
>  	int tp_yuv422_420;
> @@ -1024,6 +1021,20 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  				DIV_ROUND_UP(mode_hdisplay,
>  					     max_slice_width));
>  
> +	return min_slice_count;
> +}
> +
> +u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> +				int mode_clock, int mode_hdisplay,
> +				int num_joined_pipes)
> +{
> +	struct intel_display *display = to_intel_display(connector);
> +	int min_slice_count =
> +		intel_dp_dsc_min_slice_count(connector, mode_clock, mode_hdisplay);
> +	u32 sink_slice_count_mask =
> +		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
> +	int i;
> +
>  	/* Find the closest match to the valid slice count values */
>  	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
>  		u8 test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;


^ permalink raw reply	[flat|nested] 137+ messages in thread
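
The factored-out minimum computes a lower bound from two independent constraints and takes the larger. A simplified model of that logic (the thresholds here are illustrative parameters, not the bspec pixel-rate values the driver derives from CDCLK):

```c
#include <assert.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int max_int(int a, int b)
{
	return a > b ? a : b;
}

/*
 * The minimum slice count is bounded both by the pixel rate each DSC
 * engine can sustain and by the sink's maximum slice width.
 */
static int min_slice_count(int mode_clock, int mode_hdisplay,
			   int peak_pixel_rate_per_slice, int max_slice_width)
{
	/* Throughput bound: enough slices to cover the pixel rate. */
	int min_slices = DIV_ROUND_UP(mode_clock, peak_pixel_rate_per_slice);

	/* Also take into account the sink's max slice width. */
	return max_int(min_slices,
		       DIV_ROUND_UP(mode_hdisplay, max_slice_width));
}
```

The caller then searches upward from this minimum for a slice count the sink actually supports, which is what intel_dp_dsc_get_slice_count() keeps doing after the refactor.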

* Re: [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables
  2025-11-27 17:50 ` [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables Imre Deak
@ 2025-12-09 17:30   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 17:30 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> There is no reason to use the more specific u8 type for slice count
> variables, use the more generic int type instead.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 11 +++++------
>  1 file changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 79b87bc041a75..1d9a130bd4060 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -962,7 +962,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  					int mode_clock, int mode_hdisplay)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> -	u8 min_slice_count;
> +	int min_slice_count;
>  	int max_slice_width;
>  	int tp_rgb_yuv444;
>  	int tp_yuv422_420;
> @@ -1007,7 +1007,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  	 * slice and VDSC engine, whenever we approach close enough to max CDCLK
>  	 */
>  	if (mode_clock >= ((display->cdclk.max_cdclk_freq * 85) / 100))
> -		min_slice_count = max_t(u8, min_slice_count, 2);
> +		min_slice_count = max(min_slice_count, 2);
>  
>  	max_slice_width = drm_dp_dsc_sink_max_slice_width(connector->dp.dsc_dpcd);
>  	if (max_slice_width < DP_DSC_MIN_SLICE_WIDTH_VALUE) {
> @@ -1017,9 +1017,8 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  		return 0;
>  	}
>  	/* Also take into account max slice width */
> -	min_slice_count = max_t(u8, min_slice_count,
> -				DIV_ROUND_UP(mode_hdisplay,
> -					     max_slice_width));
> +	min_slice_count = max(min_slice_count,
> +			      DIV_ROUND_UP(mode_hdisplay, max_slice_width));
>  
>  	return min_slice_count;
>  }
> @@ -1037,7 +1036,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  
>  	/* Find the closest match to the valid slice count values */
>  	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
> -		u8 test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
> +		int test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
>  
>  		/*
>  		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line
  2025-11-27 17:50 ` [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line Imre Deak
@ 2025-12-09 17:34   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 17:34 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Rename test_slice_count to slices_per_line for clarity.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 1d9a130bd4060..650b339fd73bc 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1036,7 +1036,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  
>  	/* Find the closest match to the valid slice count values */
>  	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
> -		int test_slice_count = valid_dsc_slicecount[i] * num_joined_pipes;
> +		int slices_per_line = valid_dsc_slicecount[i] * num_joined_pipes;
>  
>  		/*
>  		 * 3 DSC Slices per pipe need 3 DSC engines, which is supported only
> @@ -1046,7 +1046,7 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes != 4))
>  			continue;
>  
> -		if (!(drm_dp_dsc_slice_count_to_mask(test_slice_count) &
> +		if (!(drm_dp_dsc_slice_count_to_mask(slices_per_line) &
>  		      sink_slice_count_mask))
>  			continue;
>  
> @@ -1058,11 +1058,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  		if (num_joined_pipes > 1 && valid_dsc_slicecount[i] < 2)
>  			continue;
>  
> -		if (mode_hdisplay % test_slice_count)
> +		if (mode_hdisplay % slices_per_line)
>  			continue;
>  
> -		if (min_slice_count <= test_slice_count)
> -			return test_slice_count;
> +		if (min_slice_count <= slices_per_line)
> +			return slices_per_line;
>  	}
>  
>  	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */


^ permalink raw reply	[flat|nested] 137+ messages in thread
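
The renamed loop walks the valid per-pipe slice counts and returns the first slices-per-line value that divides hdisplay evenly and meets the minimum. A standalone sketch of that selection (the real function also filters against the sink's slice count mask and the 3-engine limit, omitted here for brevity; 3 slices per pipe is the special case handled separately):

```c
#include <assert.h>
#include <stddef.h>

static const int valid_dsc_slicecount[] = { 1, 2, 4 };

static int pick_slices_per_line(int min_slice_count, int mode_hdisplay,
				int num_joined_pipes)
{
	size_t i;

	for (i = 0;
	     i < sizeof(valid_dsc_slicecount) / sizeof(valid_dsc_slicecount[0]);
	     i++) {
		int slices_per_line =
			valid_dsc_slicecount[i] * num_joined_pipes;

		/* Each slice must cover a whole number of pixels. */
		if (mode_hdisplay % slices_per_line)
			continue;

		if (min_slice_count <= slices_per_line)
			return slices_per_line;
	}

	return 0; /* no supported slice count */
}
```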

* Re: [PATCH 40/50] drm/i915/dp: Track the detailed DSC slice configuration
  2025-12-09 14:30     ` Imre Deak
@ 2025-12-09 17:50       ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-09 17:50 UTC (permalink / raw)
  To: Deak, Imre
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Tue, 2025-12-09 at 16:30 +0200, Imre Deak wrote:
> On Tue, Dec 09, 2025 at 04:06:53PM +0200, Jouni Hogander wrote:
> > On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> 
> > > Add tracking for the DP DSC pipes-per-line and slices-per-stream
> > > value in the slice config state and compute the current
> > > slices-per-line (slice_count) value using this slice config. The
> > slices-per-line value used atm will be removed by a follow-up change
> > after converting all the places using it to use the slice config instead.
> > > 
> > > For now the slices-per-stream value is calculated based on the
> > > slices-per-line value (slice_count) calculated by the
> > drm_dp_dsc_sink_max_slice_count() / intel_dp_dsc_get_slice_count()
> > functions. In a follow-up change these functions will be converted
> > > to calculate the slices-per-stream value directly, along with the
> > > detailed slice configuration.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 32 +++++++++++++++++++++-----------
> > >  1 file changed, 21 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index 03266511841e2..d17afc18fcfa7 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -2356,6 +2356,7 @@ int intel_dp_dsc_compute_config(struct
> > > intel_dp *intel_dp,
> > >  		&pipe_config->hw.adjusted_mode;
> > >  	int num_joined_pipes =
> > > intel_crtc_num_joined_pipes(pipe_config);
> > >  	bool is_mst = intel_crtc_has_type(pipe_config,
> > > INTEL_OUTPUT_DP_MST);
> > > +	int slices_per_line;
> > 
> > Why are you not using the obvious name for this: slice_count?
> 
> slice_count is not obvious imo. It could mean the number of slices
> per
> line/pipe/stream. It's the first one reported by the sink.

ok

> 
> > 
> > >  	int ret;
> > >  
> > >  	/*
> > > @@ -2383,30 +2384,26 @@ int intel_dp_dsc_compute_config(struct
> > > intel_dp *intel_dp,
> > >  
> > >  	/* Calculate Slice count */
> > >  	if (intel_dp_is_edp(intel_dp)) {
> > > -		pipe_config->dsc.slice_count =
> > > +		slices_per_line =
> > >  			drm_dp_dsc_sink_max_slice_count(connecto
> > > r->dp.dsc_dpcd,
> > >  							true);
> > > -		if (!pipe_config->dsc.slice_count) {
> > > +		if (!slices_per_line) {
> > >  			drm_dbg_kms(display->drm,
> > >  				    "Unsupported Slice Count
> > > %d\n",
> > > -				    pipe_config-
> > > >dsc.slice_count);
> > > +				    slices_per_line);
> > >  			return -EINVAL;
> > >  		}
> > >  	} else {
> > > -		u8 dsc_dp_slice_count;
> > > -
> > > -		dsc_dp_slice_count =
> > > +		slices_per_line =
> > >  			intel_dp_dsc_get_slice_count(connector,
> > >  						    
> > > adjusted_mode->crtc_clock,
> > >  						    
> > > adjusted_mode->crtc_hdisplay,
> > > 						    
> > > num_joined_pipes);
> > > -		if (!dsc_dp_slice_count) {
> > > +		if (!slices_per_line) {
> > >  			drm_dbg_kms(display->drm,
> > >  				    "Compressed Slice Count not
> > > supported\n");
> > >  			return -EINVAL;
> > >  		}
> > 
> > You could share the handling of !slices_per_line for DP and eDP.
> 
> I do that later in
> "Unify DP and eDP slice count computation"
> 
> leaving the changes in this patch simple.

Ok, thank you for pointing it out:

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> 
> > 
> > BR,
> > 
> > Jouni Högander



* Re: [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration
  2025-11-27 17:50 ` [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration Imre Deak
@ 2025-12-10 12:38   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-10 12:38 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Simplify the slice config loop in intel_dp_dsc_get_slice_count(),
> using
> the loop iterator as the slices-per-pipe value directly, instead of
> looking up the same value from an array.
> 
> While at it move the code comment about the slice configuration
> closer
> to where the configuration is determined and clarify it a bit.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 33 ++++++++++-------------
> --
>  1 file changed, 13 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index 650b339fd73bc..a4ff1ffc8f7d4 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -107,20 +107,6 @@
>  /* Constants for DP DSC configurations */
>  static const u8 valid_dsc_bpp[] = {6, 8, 10, 12, 15};
>  
> -/*
> - * With Single pipe configuration, HW is capable of supporting
> maximum of:
> - * 2 slices per line for ICL, BMG
> - * 4 slices per line for other platforms.
> - * For now consider a max of 2 slices per line, which works for all
> platforms.
> - * With this we can have max of 4 DSC Slices per pipe.
> - *
> - * For higher resolutions where 12 slice support is required with
> - * ultrajoiner, only then each pipe can support 3 slices.
> - *
> - * #TODO Split this better to use 4 slices/dsc engine where
> supported.
> - */
> -static const u8 valid_dsc_slicecount[] = {1, 2, 3, 4};
> -
>  /**
>   * intel_dp_is_edp - is the given port attached to an eDP panel
> (either CPU or PCH)
>   * @intel_dp: DP struct
> @@ -1032,17 +1018,24 @@ u8 intel_dp_dsc_get_slice_count(const struct
> intel_connector *connector,
>  		intel_dp_dsc_min_slice_count(connector, mode_clock,
> mode_hdisplay);
>  	u32 sink_slice_count_mask =
>  		drm_dp_dsc_sink_slice_count_mask(connector-
> >dp.dsc_dpcd, false);
> -	int i;
> +	int slices_per_pipe;
>  
> -	/* Find the closest match to the valid slice count values */
> -	for (i = 0; i < ARRAY_SIZE(valid_dsc_slicecount); i++) {
> -		int slices_per_line = valid_dsc_slicecount[i] *
> num_joined_pipes;
> +	/*
> +	 * Find the closest match to the valid slice count values
> +	 *
> +	 * Max HW DSC-per-pipe x slice-per-DSC (= slice-per-pipe)
> capability:
> +	 * ICL:  2x2
> +	 * BMG:  2x2, or for ultrajoined 4 pipes: 3x1
> +	 * TGL+: 2x4 (TODO: Add support for this)
> +	 */
> +	for (slices_per_pipe = 1; slices_per_pipe <= 4;
> slices_per_pipe++) {
> +		int slices_per_line = slices_per_pipe *
> num_joined_pipes;
>  
>  		/*
>  		 * 3 DSC Slices per pipe need 3 DSC engines, which
> is supported only
>  		 * with Ultrajoiner only for some platforms.
>  		 */
> -		if (valid_dsc_slicecount[i] == 3 &&
> +		if (slices_per_pipe == 3 &&
>  		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes
> != 4))
>  			continue;
>  
> @@ -1055,7 +1048,7 @@ u8 intel_dp_dsc_get_slice_count(const struct
> intel_connector *connector,
>  		  * So there should be at least 2 dsc slices per
> pipe,
>  		  * whenever bigjoiner is enabled.
>  		  */
> -		if (num_joined_pipes > 1 && valid_dsc_slicecount[i]
> < 2)
> +		if (num_joined_pipes > 1 && slices_per_pipe < 2)
>  			continue;
>  
>  		if (mode_hdisplay % slices_per_line)


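Putting the quoted loop together with the min_slice_count and sink-mask checks it uses, the selection logic of patch 45 boils down to roughly the following standalone sketch. Names are simplified for illustration: `has_dsc_3engines` stands in for the `HAS_DSC_3ENGINES(display)` platform check, and the sink mask encodes a supported slice count N as bit N-1, as in the quoted code comment.

```c
/*
 * Standalone sketch of the slices-per-line selection loop from the
 * quoted patch. Returns the chosen slices-per-line value, or 0 if no
 * valid configuration exists.
 */
static int pick_slices_per_line(int num_joined_pipes, int mode_hdisplay,
				unsigned int sink_slice_count_mask,
				int min_slice_count, int has_dsc_3engines)
{
	int slices_per_pipe;

	for (slices_per_pipe = 1; slices_per_pipe <= 4; slices_per_pipe++) {
		int slices_per_line = slices_per_pipe * num_joined_pipes;

		/* 3 slices/pipe need 3 DSC engines: ultrajoiner-only. */
		if (slices_per_pipe == 3 &&
		    (!has_dsc_3engines || num_joined_pipes != 4))
			continue;

		/* Joined pipes need at least 2 slices per pipe. */
		if (num_joined_pipes > 1 && slices_per_pipe < 2)
			continue;

		/* Each slice must cover the same number of pixels. */
		if (mode_hdisplay % slices_per_line)
			continue;

		/* The sink mask encodes a supported count N as bit N-1. */
		if (!(sink_slice_count_mask & (1u << (slices_per_line - 1))))
			continue;

		if (min_slice_count <= slices_per_line)
			return slices_per_line;
	}

	return 0;
}
```

For instance, a single-pipe 1920-wide mode with a sink supporting 1, 2 and 4 slices (mask 0xB) and a minimum of 2 slices selects 2 slices per line.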

* Re: [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation
  2025-11-27 17:49 ` [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation Imre Deak
@ 2025-12-10 12:48   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 12:48 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Use intel_dp_effective_data_rate() to calculate the required link BW for
> eDP, DP-SST and MST links. This ensures that the BW is calculated the
> same way for all DP output types, during mode validation as well as
> during state computation. This approach also allows accounting for
> BW overheads due to SSC, DSC and FEC being enabled on a link, as well
> as due to the MST symbol alignment on the link. Accounting for these
> overheads will be added by follow-up changes.
> 
> This way also computes the stream BW on a UHBR link correctly, using the
> corresponding symbol size to effective data size ratio (i.e. ~97% link
> BW utilization for UHBR vs. only ~80% for non-UHBR).
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c       | 40 +++++++++++--------
>  drivers/gpu/drm/i915/display/intel_dp.h       |  4 +-
>  .../drm/i915/display/intel_dp_link_training.c |  4 +-
>  drivers/gpu/drm/i915/display/intel_dp_mst.c   |  4 +-
>  4 files changed, 33 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 4556a57db7c02..aa55a81a9a9bf 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -453,15 +453,15 @@ int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
>  /*
>   * The required data bandwidth for a mode with given pixel clock and bpp. This
>   * is the required net bandwidth independent of the data bandwidth efficiency.
> - *
> - * TODO: check if callers of this functions should use
> - * intel_dp_effective_data_rate() instead.
>   */
> -int
> -intel_dp_link_required(int pixel_clock, int bpp)
> +int intel_dp_link_required(int link_clock, int lane_count,
> +			   int mode_clock, int mode_hdisplay,
> +			   int link_bpp_x16, unsigned long bw_overhead_flags)
>  {
> -	/* pixel_clock is in kHz, divide bpp by 8 for bit to Byte conversion */
> -	return DIV_ROUND_UP(pixel_clock * bpp, 8);
> +	int bw_overhead = intel_dp_link_bw_overhead(link_clock, lane_count, mode_hdisplay,
> +						    0, link_bpp_x16, bw_overhead_flags);
> +
> +	return intel_dp_effective_data_rate(mode_clock, link_bpp_x16, bw_overhead);
>  }
>  
>  /**
> @@ -1531,7 +1531,9 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>  	max_rate = intel_dp_max_link_data_rate(intel_dp, max_link_clock, max_lanes);
>  
>  	link_bpp_x16 = intel_dp_mode_min_link_bpp_x16(connector, mode);
> -	mode_rate = intel_dp_link_required(target_clock, fxp_q4_to_int_roundup(link_bpp_x16));
> +	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
> +					   target_clock, mode->hdisplay,
> +					   link_bpp_x16, 0);
>  
>  	if (intel_dp_has_dsc(connector)) {
>  		int pipe_bpp;
> @@ -1838,7 +1840,7 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  				  const struct link_config_limits *limits)
>  {
>  	int bpp, i, lane_count, clock = intel_dp_mode_clock(pipe_config, conn_state);
> -	int mode_rate, link_rate, link_avail;
> +	int link_rate, link_avail;
>  
>  	for (bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
>  	     bpp >= fxp_q4_to_int(limits->link.min_bpp_x16);
> @@ -1846,8 +1848,6 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  		int link_bpp_x16 =
>  			intel_dp_output_format_link_bpp_x16(pipe_config->output_format, bpp);
>  
> -		mode_rate = intel_dp_link_required(clock, fxp_q4_to_int_roundup(link_bpp_x16));
> -
>  		for (i = 0; i < intel_dp->num_common_rates; i++) {
>  			link_rate = intel_dp_common_rate(intel_dp, i);
>  			if (link_rate < limits->min_rate ||
> @@ -1857,11 +1857,17 @@ intel_dp_compute_link_config_wide(struct intel_dp *intel_dp,
>  			for (lane_count = limits->min_lane_count;
>  			     lane_count <= limits->max_lane_count;
>  			     lane_count <<= 1) {
> +				const struct drm_display_mode *adjusted_mode =
> +					&pipe_config->hw.adjusted_mode;
> +				int mode_rate =
> +					intel_dp_link_required(link_rate, lane_count,
> +							       clock, adjusted_mode->hdisplay,
> +							       link_bpp_x16, 0);
> +
>  				link_avail = intel_dp_max_link_data_rate(intel_dp,
>  									 link_rate,
>  									 lane_count);
>  
> -
>  				if (mode_rate <= link_avail) {
>  					pipe_config->lane_count = lane_count;
>  					pipe_config->pipe_bpp = bpp;
> @@ -2724,11 +2730,13 @@ int intel_dp_config_required_rate(const struct intel_crtc_state *crtc_state)
>  {
>  	const struct drm_display_mode *adjusted_mode =
>  		&crtc_state->hw.adjusted_mode;
> -	int bpp = crtc_state->dsc.compression_enable ?
> -		fxp_q4_to_int_roundup(crtc_state->dsc.compressed_bpp_x16) :
> -		crtc_state->pipe_bpp;
> +	int link_bpp_x16 = crtc_state->dsc.compression_enable ?
> +		crtc_state->dsc.compressed_bpp_x16 :
> +		fxp_q4_from_int(crtc_state->pipe_bpp);
>  
> -	return intel_dp_link_required(adjusted_mode->crtc_clock, bpp);
> +	return intel_dp_link_required(crtc_state->port_clock, crtc_state->lane_count,
> +				      adjusted_mode->crtc_clock, adjusted_mode->hdisplay,
> +				      link_bpp_x16, 0);
>  }
>  
>  bool intel_dp_joiner_needs_dsc(struct intel_display *display,
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index d7f9410129f49..30eebb8cad6d2 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -119,7 +119,9 @@ bool intel_dp_source_supports_tps4(struct intel_display *display);
>  
>  int intel_dp_link_bw_overhead(int link_clock, int lane_count, int hdisplay,
>  			      int dsc_slice_count, int bpp_x16, unsigned long flags);
> -int intel_dp_link_required(int pixel_clock, int bpp);
> +int intel_dp_link_required(int link_clock, int lane_count,
> +			   int mode_clock, int mode_hdisplay,
> +			   int link_bpp_x16, unsigned long bw_overhead_flags);
>  int intel_dp_effective_data_rate(int pixel_clock, int bpp_x16,
>  				 int bw_overhead);
>  int intel_dp_max_link_data_rate(struct intel_dp *intel_dp,
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> index aad5fe14962f9..54c585c59b900 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
> @@ -1195,7 +1195,9 @@ static bool intel_dp_can_link_train_fallback_for_edp(struct intel_dp *intel_dp,
>  		intel_panel_preferred_fixed_mode(intel_dp->attached_connector);
>  	int mode_rate, max_rate;
>  
> -	mode_rate = intel_dp_link_required(fixed_mode->clock, 18);
> +	mode_rate = intel_dp_link_required(link_rate, lane_count,
> +					   fixed_mode->clock, fixed_mode->hdisplay,
> +					   fxp_q4_from_int(18), 0);
>  	max_rate = intel_dp_max_link_data_rate(intel_dp, link_rate, lane_count);
>  	if (mode_rate > max_rate)
>  		return false;
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index c1058b4a85d02..e4dd6b4ca0512 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1489,7 +1489,9 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
>  
>  	max_rate = intel_dp_max_link_data_rate(intel_dp,
>  					       max_link_clock, max_lanes);
> -	mode_rate = intel_dp_link_required(mode->clock, min_bpp);
> +	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
> +					   mode->clock, mode->hdisplay,
> +					   fxp_q4_from_int(min_bpp), 0);
>  
>  	/*
>  	 * TODO:

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

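The effective-data-rate computation that patch 08 switches to can be sketched as follows. This is a simplified standalone model, not the driver's helper: the overhead factor is assumed to be scaled by 1000000 (so 1060000 means 6% overhead), and the exact scale and rounding of the real drm helper may differ in detail.

```c
#include <stdint.h>

/* Assumed overhead scale: 1000000 == no overhead (sketch only). */
#define OVERHEAD_SCALE 1000000ULL

/*
 * Required stream data rate in kBytes/sec: pixel clock in kHz times the
 * link bpp (.4 fixed point), scaled up by the BW overhead, with the
 * fixed-point factor (x16), the overhead scale and the bits-to-bytes
 * conversion (x8) divided back out, rounding up.
 */
static uint64_t effective_data_rate_kbytes(uint64_t pixel_clock_khz,
					   uint64_t link_bpp_x16,
					   uint64_t bw_overhead)
{
	uint64_t num = pixel_clock_khz * link_bpp_x16 * bw_overhead;
	uint64_t den = OVERHEAD_SCALE * 16 * 8;

	return (num + den - 1) / den;
}
```

As a sanity check, a 148.5 MHz (1080p60) mode at 24 bpp with no overhead needs 445500 kBytes/s, and a 6% overhead scales that to 472230 kBytes/s.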

* Re: [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation
  2025-11-27 17:49 ` [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed " Imre Deak
@ 2025-12-10 12:50   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 12:50 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Use intel_dp_effective_data_rate() to calculate the required link BW for
> compressed streams on non-UHBR DP-SST links. This ensures that the BW is
> calculated the same way for all DP output types and DSC/non-DSC modes,
> during mode validation as well as during state computation.
> 
> This approach also allows accounting for BW overhead due to DSC and
> FEC being enabled on a link. Accounting for these will be added by
> follow-up changes.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 27 +++++++++++++++----------
>  1 file changed, 16 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index aa55a81a9a9bf..4044bdbceaef5 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2025,15 +2025,19 @@ static bool intel_dp_dsc_supports_format(const struct intel_connector *connector
>  	return drm_dp_dsc_sink_supports_format(connector->dp.dsc_dpcd, sink_dsc_format);
>  }
>  
> -static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
> -					    u32 lane_count, u32 mode_clock,
> -					    enum intel_output_format output_format,
> -					    int timeslots)
> +static bool is_bw_sufficient_for_dsc_config(struct intel_dp *intel_dp,
> +					    int link_clock, int lane_count,
> +					    int mode_clock, int mode_hdisplay,
> +					    int dsc_slice_count, int link_bpp_x16,
> +					    unsigned long bw_overhead_flags)
>  {
> -	u32 available_bw, required_bw;
> +	int available_bw;
> +	int required_bw;
>  
> -	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
> -	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
> +	available_bw = intel_dp_max_link_data_rate(intel_dp, link_clock, lane_count);
> +	required_bw = intel_dp_link_required(link_clock, lane_count,
> +					     mode_clock, mode_hdisplay,
> +					     link_bpp_x16, bw_overhead_flags);
>  
>  	return available_bw >= required_bw;
>  }
> @@ -2081,11 +2085,12 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  				if (ret)
>  					continue;
>  			} else {
> -				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
> -								     lane_count,
> +				if (!is_bw_sufficient_for_dsc_config(intel_dp,
> +								     link_rate, lane_count,
>  								     adjusted_mode->crtc_clock,
> -								     pipe_config->output_format,
> -								     timeslots))
> +								     adjusted_mode->hdisplay,
> +								     pipe_config->dsc.slice_count,
> +								     dsc_bpp_x16, 0))
>  					continue;
>  			}
>  

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW
  2025-11-27 17:49 ` [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW Imre Deak
@ 2025-12-10 13:08   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 13:08 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> On MST links the symbol alignment and SSC have a BW overhead, which
> should be accounted for when calculating the required stream BW; do so
> during mode validation for an uncompressed stream.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index e4dd6b4ca0512..0db6ed2d9664c 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1458,6 +1458,8 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
>  	const int min_bpp = 18;
>  	int max_dotclk = display->cdclk.max_dotclk_freq;
>  	int max_rate, mode_rate, max_lanes, max_link_clock;
> +	unsigned long bw_overhead_flags =
> +		DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK;
>  	int ret;
>  	bool dsc = false;
>  	u16 dsc_max_compressed_bpp = 0;
> @@ -1491,7 +1493,8 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
>  					       max_link_clock, max_lanes);
>  	mode_rate = intel_dp_link_required(max_link_clock, max_lanes,
>  					   mode->clock, mode->hdisplay,
> -					   fxp_q4_from_int(min_bpp), 0);
> +					   fxp_q4_from_int(min_bpp),
> +					   bw_overhead_flags);
>  
>  	/*
>  	 * TODO:

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST stream BW
  2025-11-27 17:49 ` [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST " Imre Deak
@ 2025-12-10 13:39   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 13:39 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> A DSC compressed stream requires FEC (except for eDP), which has a BW
> overhead on non-UHBR links that must be accounted for explicitly. Do
> that when computing the required BW.
> 
> Note that the overhead doesn't need to be accounted for on UHBR links
> where FEC is always enabled and so the corresponding overhead is part of
> the channel coding efficiency instead (i.e. the overhead is part of the
> available vs. the required BW).
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 4044bdbceaef5..55be648283b19 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2085,12 +2085,16 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  				if (ret)
>  					continue;
>  			} else {
> +				unsigned long bw_overhead_flags =
> +					pipe_config->fec_enable ? DRM_DP_BW_OVERHEAD_FEC : 0;
> +
>  				if (!is_bw_sufficient_for_dsc_config(intel_dp,
>  								     link_rate, lane_count,
>  								     adjusted_mode->crtc_clock,
>  								     adjusted_mode->hdisplay,
>  								     pipe_config->dsc.slice_count,
> -								     dsc_bpp_x16, 0))
> +								     dsc_bpp_x16,
> +								     bw_overhead_flags))
>  					continue;
>  			}
>  

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.


* Re: [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config() Imre Deak
@ 2025-12-10 14:06   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-10 14:06 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add intel_dsc_get_slice_config() and move the logic to select a given
> slice configuration to that function from the configuration loop in
> intel_dp_dsc_get_slice_count(). The same functionality can be used by
> other outputs like DSI as well; this will be done as a follow-up.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c   | 22 ++++-------
>  drivers/gpu/drm/i915/display/intel_vdsc.c | 48
> +++++++++++++++++++++++
>  drivers/gpu/drm/i915/display/intel_vdsc.h |  4 ++
>  3 files changed, 59 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index a4ff1ffc8f7d4..461f80bd54cbf 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1029,28 +1029,20 @@ u8 intel_dp_dsc_get_slice_count(const struct
> intel_connector *connector,
>  	 * TGL+: 2x4 (TODO: Add support for this)
>  	 */
>  	for (slices_per_pipe = 1; slices_per_pipe <= 4;
> slices_per_pipe++) {
> -		int slices_per_line = slices_per_pipe *
> num_joined_pipes;
> +		struct intel_dsc_slice_config config;
> +		int slices_per_line;
>  
> -		/*
> -		 * 3 DSC Slices per pipe need 3 DSC engines, which
> is supported only
> -		 * with Ultrajoiner only for some platforms.
> -		 */
> -		if (slices_per_pipe == 3 &&
> -		    (!HAS_DSC_3ENGINES(display) || num_joined_pipes
> != 4))
> +		if (!intel_dsc_get_slice_config(display,
> +						num_joined_pipes,
> slices_per_pipe,
> +						&config))
>  			continue;
>  
> +		slices_per_line =
> intel_dsc_line_slice_count(&config);
> +
>  		if
> (!(drm_dp_dsc_slice_count_to_mask(slices_per_line) &
>  		      sink_slice_count_mask))
>  			continue;
>  
> -		 /*
> -		  * Bigjoiner needs small joiner to be enabled.
> -		  * So there should be at least 2 dsc slices per
> pipe,
> -		  * whenever bigjoiner is enabled.
> -		  */
> -		if (num_joined_pipes > 1 && slices_per_pipe < 2)
> -			continue;
> -
>  		if (mode_hdisplay % slices_per_line)
>  			continue;
>  
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.c
> b/drivers/gpu/drm/i915/display/intel_vdsc.c
> index 190ce567bc7fa..9910134d52653 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.c
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.c
> @@ -40,6 +40,54 @@ int intel_dsc_line_slice_count(const struct
> intel_dsc_slice_config *config)
>  	return config->pipes_per_line * config->streams_per_pipe *
> config->slices_per_stream;
>  }
>  
> +bool intel_dsc_get_slice_config(struct intel_display *display,
> +				int pipes_per_line, int
> slices_per_pipe,
> +				struct intel_dsc_slice_config
> *config)
> +{
> +	int streams_per_pipe;
> +
> +	/* TODO: Add support for 8 slices per pipe on TGL+. */
> +	switch (slices_per_pipe) {
> +	case 3:
> +		/*
> +		 * 3 DSC Slices per pipe need 3 DSC engines, which
> is supported only
> +		 * with Ultrajoiner only for some platforms.
> +		 */
> +		if (!HAS_DSC_3ENGINES(display) || pipes_per_line !=
> 4)
> +			return false;
> +
> +		streams_per_pipe = 3;
> +		break;
> +	case 4:
> +		/* TODO: Consider using 1 DSC engine stream x 4
> slices instead. */
> +	case 2:
> +		/* TODO: Consider using 1 DSC engine stream x 2
> slices instead. */
> +		streams_per_pipe = 2;
> +		break;
> +	case 1:
> +		 /*
> +		  * Bigjoiner needs small joiner to be enabled.
> +		  * So there should be at least 2 dsc slices per
> pipe,
> +		  * whenever bigjoiner is enabled.
> +		  */
> +		if (pipes_per_line > 1)
> +			return false;
> +
> +		streams_per_pipe = 1;
> +		break;
> +	default:
> +		MISSING_CASE(slices_per_pipe);
> +		return false;
> +	}
> +
> +	config->pipes_per_line = pipes_per_line;
> +	config->streams_per_pipe = streams_per_pipe;
> +	config->slices_per_stream = slices_per_pipe /
> streams_per_pipe;
> +
> +	return true;
> +}
> +
> +
>  static bool is_pipe_dsc(struct intel_crtc *crtc, enum transcoder
> cpu_transcoder)
>  {
>  	struct intel_display *display = to_intel_display(crtc);
> diff --git a/drivers/gpu/drm/i915/display/intel_vdsc.h
> b/drivers/gpu/drm/i915/display/intel_vdsc.h
> index e61116d5297c8..aeb17670307b1 100644
> --- a/drivers/gpu/drm/i915/display/intel_vdsc.h
> +++ b/drivers/gpu/drm/i915/display/intel_vdsc.h
> @@ -13,11 +13,15 @@ struct drm_printer;
>  enum transcoder;
>  struct intel_crtc;
>  struct intel_crtc_state;
> +struct intel_display;
>  struct intel_dsc_slice_config;
>  struct intel_encoder;
>  
>  bool intel_dsc_source_support(const struct intel_crtc_state
> *crtc_state);
>  int intel_dsc_line_slice_count(const struct intel_dsc_slice_config
> *config);
> +bool intel_dsc_get_slice_config(struct intel_display *display,
> +				int num_joined_pipes, int
> slice_per_pipe,
> +				struct intel_dsc_slice_config
> *config);
>  void intel_uncompressed_joiner_enable(const struct intel_crtc_state
> *crtc_state);
>  void intel_dsc_enable(const struct intel_crtc_state *crtc_state);
>  void intel_dsc_disable(const struct intel_crtc_state *crtc_state);


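The decomposition that the quoted intel_dsc_get_slice_config() performs, splitting a slices-per-pipe value into engine streams and slices per stream, can be restated as this standalone sketch. The struct and function names here are illustrative stand-ins for the driver's types, and `has_dsc_3engines` replaces the `HAS_DSC_3ENGINES(display)` check.

```c
struct dsc_slice_config {
	int pipes_per_line;
	int streams_per_pipe;
	int slices_per_stream;
};

/* Returns 1 and fills *config on success, 0 if the combination is not
 * supported (mirroring the bool return of the quoted helper). */
static int get_slice_config(int pipes_per_line, int slices_per_pipe,
			    int has_dsc_3engines,
			    struct dsc_slice_config *config)
{
	int streams_per_pipe;

	switch (slices_per_pipe) {
	case 3:
		/* 3 slices/pipe need 3 DSC engines: ultrajoined 4 pipes only. */
		if (!has_dsc_3engines || pipes_per_line != 4)
			return 0;
		streams_per_pipe = 3;
		break;
	case 4:
	case 2:
		/* 2 engine streams, each carrying half the slices. */
		streams_per_pipe = 2;
		break;
	case 1:
		/* Joined pipes need at least 2 slices per pipe. */
		if (pipes_per_line > 1)
			return 0;
		streams_per_pipe = 1;
		break;
	default:
		return 0;
	}

	config->pipes_per_line = pipes_per_line;
	config->streams_per_pipe = streams_per_pipe;
	config->slices_per_stream = slices_per_pipe / streams_per_pipe;

	return 1;
}
```

With this shape, 3 slices per pipe on a 4-pipe ultrajoined config maps to 3 engine streams of 1 slice each, while 4 slices per pipe map to 2 streams of 2 slices.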

* Re: [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP
  2025-11-27 17:49 ` [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP Imre Deak
@ 2025-12-10 14:29   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 14:29 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> The pipe joiner maximum compressed BPP must be limited based on the pipe
> joiner memory size and BW; do that for all DP outputs by adjusting the
> max compressed BPP value already in
> intel_dp_compute_config_link_bpp_limits() (which is used by all output
> types).
> 
> This way the BPP doesn't need to be adjusted in
> dsc_compute_compressed_bpp() (called for DP-SST after the above limits
> were computed already), so remove the adjustment from there.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 55be648283b19..def1f869febc2 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2245,19 +2245,12 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  {
>  	struct intel_display *display = to_intel_display(intel_dp);
>  	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
> -	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
>  	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
> -	int dsc_joiner_max_bpp;
> -	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
>  	int link_bpp_x16;
>  	int bpp_x16;
>  	int ret;
>  
> -	dsc_joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, adjusted_mode->crtc_clock,
> -								adjusted_mode->hdisplay,
> -								num_joined_pipes);
> -	max_bpp_x16 = min(fxp_q4_from_int(dsc_joiner_max_bpp), limits->link.max_bpp_x16);
> -
> +	max_bpp_x16 = limits->link.max_bpp_x16;
>  	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
>  
>  	/* Compressed BPP should be less than the Input DSC bpp */
> @@ -2613,6 +2606,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
>  		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
>  		int throughput_max_bpp_x16;
> +		int joiner_max_bpp;
>  
>  		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
>  		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state);
> @@ -2620,11 +2614,17 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
>  
>  		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
> +		joiner_max_bpp =
> +			get_max_compressed_bpp_with_joiner(display,
> +							   adjusted_mode->crtc_clock,
> +							   adjusted_mode->hdisplay,
> +							   intel_crtc_num_joined_pipes(crtc_state));
>  		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
>  									crtc_state,
>  									limits->pipe.max_bpp / 3);
>  		dsc_max_bpp = dsc_sink_max_bpp ?
>  			      min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
> +		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
>  
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
>  

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

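The limit combination that patch 12 moves into intel_dp_compute_config_link_bpp_limits() reduces to the following small sketch: the sink and source maximums are combined first (a zero sink value meaning "no sink-specific limit"), then the result is clamped by the joiner limit. Names are illustrative, not the driver's.

```c
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Combine source, sink and joiner maximum compressed BPP limits. */
static int dsc_max_compressed_bpp(int src_max_bpp, int sink_max_bpp,
				  int joiner_max_bpp)
{
	int max_bpp = sink_max_bpp ?
		      MIN(sink_max_bpp, src_max_bpp) : src_max_bpp;

	return MIN(max_bpp, joiner_max_bpp);
}
```

For example, a source max of 27 bpp with no sink limit and a joiner limit of 24 yields 24, while a sink limit of 20 takes precedence over both.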

* Re: [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config()
  2025-11-27 17:49 ` [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config() Imre Deak
@ 2025-12-10 14:31   ` Luca Coelho
  0 siblings, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-10 14:31 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Drop the unused timeslots parameter from dsc_compute_link_config() and
> other functions calling it.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 18 +++++++-----------
>  1 file changed, 7 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index def1f869febc2..000fccc39a292 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2046,8 +2046,7 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  				   struct intel_crtc_state *pipe_config,
>  				   struct drm_connector_state *conn_state,
>  				   const struct link_config_limits *limits,
> -				   int dsc_bpp_x16,
> -				   int timeslots)
> +				   int dsc_bpp_x16)
>  {
>  	const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
>  	int link_rate, lane_count;
> @@ -2240,8 +2239,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  				      struct intel_crtc_state *pipe_config,
>  				      struct drm_connector_state *conn_state,
>  				      const struct link_config_limits *limits,
> -				      int pipe_bpp,
> -				      int timeslots)
> +				      int pipe_bpp)
>  {
>  	struct intel_display *display = to_intel_display(intel_dp);
>  	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
> @@ -2269,8 +2267,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  					      pipe_config,
>  					      conn_state,
>  					      limits,
> -					      bpp_x16,
> -					      timeslots);
> +					      bpp_x16);
>  		if (ret == 0) {
>  			pipe_config->dsc.compressed_bpp_x16 = bpp_x16;
>  			if (intel_dp->force_dsc_fractional_bpp_en &&
> @@ -2327,8 +2324,7 @@ int intel_dp_force_dsc_pipe_bpp(struct intel_dp *intel_dp,
>  static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  					 struct intel_crtc_state *pipe_config,
>  					 struct drm_connector_state *conn_state,
> -					 const struct link_config_limits *limits,
> -					 int timeslots)
> +					 const struct link_config_limits *limits)
>  {
>  	const struct intel_connector *connector =
>  		to_intel_connector(conn_state->connector);
> @@ -2340,7 +2336,7 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  
>  	if (forced_bpp) {
>  		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> -						 limits, forced_bpp, timeslots);
> +						 limits, forced_bpp);
>  		if (ret == 0) {
>  			pipe_config->pipe_bpp = forced_bpp;
>  			return 0;
> @@ -2358,7 +2354,7 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  			continue;
>  
>  		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> -						 limits, pipe_bpp, timeslots);
> +						 limits, pipe_bpp);
>  		if (ret == 0) {
>  			pipe_config->pipe_bpp = pipe_bpp;
>  			return 0;
> @@ -2469,7 +2465,7 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  							     conn_state, limits);
>  		else
>  			ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
> -							    conn_state, limits, timeslots);
> +							    conn_state, limits);
>  		if (ret) {
>  			drm_dbg_kms(display->drm,
>  				    "No Valid pipe bpp for given mode ret = %d\n", ret);


Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config() Imre Deak
@ 2025-12-10 14:44   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-10 14:44 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Use intel_dsc_get_slice_config() for DSI to compute the slice
> configuration based on the slices-per-line sink capability, instead of
> open-coding the same.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_bios.c | 25 ++++++++++++-----------
>  1 file changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_bios.c b/drivers/gpu/drm/i915/display/intel_bios.c
> index 698e569a48e61..a7f02fa518d21 100644
> --- a/drivers/gpu/drm/i915/display/intel_bios.c
> +++ b/drivers/gpu/drm/i915/display/intel_bios.c
> @@ -3486,12 +3486,13 @@ bool intel_bios_is_dsi_present(struct intel_display *display,
>  	return false;
>  }
>  
> -static void fill_dsc(struct intel_crtc_state *crtc_state,
> +static bool fill_dsc(struct intel_crtc_state *crtc_state,
>  		     struct dsc_compression_parameters_entry *dsc,
>  		     int dsc_max_bpc)
>  {
>  	struct intel_display *display = to_intel_display(crtc_state);
>  	struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
> +	int slices_per_line;
>  	int bpc = 8;
>  
>  	vdsc_cfg->dsc_version_major = dsc->version_major;
> @@ -3520,24 +3521,24 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  	 *
>  	 * FIXME: split only when necessary
>  	 */
> -	crtc_state->dsc.slice_config.pipes_per_line = 1;
> -
>  	if (dsc->slices_per_line & BIT(2)) {
> -		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> -		crtc_state->dsc.slice_config.slices_per_stream = 2;
> +		slices_per_line = 4;
>  	} else if (dsc->slices_per_line & BIT(1)) {
> -		crtc_state->dsc.slice_config.streams_per_pipe = 2;
> -		crtc_state->dsc.slice_config.slices_per_stream = 1;
> +		slices_per_line = 2;
>  	} else {
>  		/* FIXME */
>  		if (!(dsc->slices_per_line & BIT(0)))
>  			drm_dbg_kms(display->drm,
>  				    "VBT: Unsupported DSC slice count for DSI\n");
>  
> -		crtc_state->dsc.slice_config.streams_per_pipe = 1;
> -		crtc_state->dsc.slice_config.slices_per_stream = 1;
> +		slices_per_line = 1;
>  	}
>  
> +	if (drm_WARN_ON(display->drm,
> +			!intel_dsc_get_slice_config(display, 1, slices_per_line,
> +						    &crtc_state->dsc.slice_config)))
> +		return false;
> +
>  	if (crtc_state->hw.adjusted_mode.crtc_hdisplay %
>  	    intel_dsc_line_slice_count(&crtc_state->dsc.slice_config) != 0)
>  		drm_dbg_kms(display->drm,
> @@ -3558,6 +3559,8 @@ static void fill_dsc(struct intel_crtc_state *crtc_state,
>  	vdsc_cfg->block_pred_enable = dsc->block_prediction_enable;
>  
>  	vdsc_cfg->slice_height = dsc->slice_height;
> +
> +	return true;
>  }
>  
>  /* FIXME: initially DSI specific */
> @@ -3578,9 +3581,7 @@ bool intel_bios_get_dsc_params(struct intel_encoder *encoder,
>  			if (!devdata->dsc)
>  				return false;
>  
> -			fill_dsc(crtc_state, devdata->dsc, dsc_max_bpc);
> -
> -			return true;
> +			return fill_dsc(crtc_state, devdata->dsc, dsc_max_bpc);
>  		}
>  	}
>  


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation
  2025-11-27 17:50 ` [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation Imre Deak
@ 2025-12-11  6:48   ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-11  6:48 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Unify the DP and eDP slice count computation. Atm eDP simply returns
> the maximum slices-per-line value supported by the sink, but using the
> same helper function for both cases still makes sense, since a follow-up
> change will compute the detailed slice config for both cases.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 50 ++++++++++++-------------
>  1 file changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 461f80bd54cbf..0db401ec0156f 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -948,11 +948,20 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  					int mode_clock, int mode_hdisplay)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> +	bool is_edp =
> +		connector->base.connector_type == DRM_MODE_CONNECTOR_eDP;
>  	int min_slice_count;
>  	int max_slice_width;
>  	int tp_rgb_yuv444;
>  	int tp_yuv422_420;
>  
> +	/*
> +	 * TODO: allow using less than the maximum number of slices
> +	 * supported by the eDP sink, to allow using fewer DSC engines.
> +	 */
> +	if (is_edp)
> +		return drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd, true);
> +
>  	/*
>  	 * TODO: Use the throughput value specific to the actual RGB/YUV
>  	 * format of the output.
> @@ -1016,8 +1025,10 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  	struct intel_display *display = to_intel_display(connector);
>  	int min_slice_count =
>  		intel_dp_dsc_min_slice_count(connector, mode_clock, mode_hdisplay);
> +	bool is_edp =
> +		connector->base.connector_type == DRM_MODE_CONNECTOR_eDP;
>  	u32 sink_slice_count_mask =
> -		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, false);
> +		drm_dp_dsc_sink_slice_count_mask(connector->dp.dsc_dpcd, is_edp);
>  	int slices_per_pipe;
>  
>  	/*
> @@ -1470,9 +1481,13 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>  		if (intel_dp_is_edp(intel_dp)) {
>  			dsc_max_compressed_bpp =
>  				drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd) >> 4;
> +
>  			dsc_slice_count =
> -				drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
> -								true);
> +				intel_dp_dsc_get_slice_count(connector,
> +							     target_clock,
> +							     mode->hdisplay,
> +							     num_joined_pipes);
> +
>  			dsc = dsc_max_compressed_bpp && dsc_slice_count;
>  		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>  			unsigned long bw_overhead_flags = 0;
> @@ -2380,28 +2395,13 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  	}
>  
>  	/* Calculate Slice count */
> -	if (intel_dp_is_edp(intel_dp)) {
> -		slices_per_line =
> -			drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
> -							true);
> -		if (!slices_per_line) {
> -			drm_dbg_kms(display->drm,
> -				    "Unsupported Slice Count %d\n",
> -				    slices_per_line);
> -			return -EINVAL;
> -		}
> -	} else {
> -		slices_per_line =
> -			intel_dp_dsc_get_slice_count(connector,
> -						     adjusted_mode->crtc_clock,
> -						     adjusted_mode->crtc_hdisplay,
> -						     num_joined_pipes);
> -		if (!slices_per_line) {
> -			drm_dbg_kms(display->drm,
> -				    "Compressed Slice Count not supported\n");
> -			return -EINVAL;
> -		}
> -	}
> +	slices_per_line = intel_dp_dsc_get_slice_count(connector,
> +						       adjusted_mode->crtc_clock,
> +						       adjusted_mode->crtc_hdisplay,
> +						       num_joined_pipes);
> +	if (!slices_per_line)
> +		return -EINVAL;
> +
>  	/*
>  	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
>  	 * is greater than the maximum Cdclock and if slice count is even


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config() Imre Deak
@ 2025-12-11  6:55   ` Hogander, Jouni
  2025-12-11  9:52     ` Imre Deak
  2025-12-12 18:17   ` [PATCH v2 " Imre Deak
  1 sibling, 1 reply; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-11  6:55 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add intel_dp_dsc_get_slice_config() to compute the detailed slice
> configuration and determine the slices-per-line value (returned by
> intel_dp_dsc_get_slice_count()) using this function.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 37 +++++++++++++++++++------
>  1 file changed, 28 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 0db401ec0156f..003f4b18c1175 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -971,10 +971,10 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  	 */
>  	if (mode_clock > max(connector->dp.dsc_branch_caps.overall_throughput.rgb_yuv444,
>  			     connector->dp.dsc_branch_caps.overall_throughput.yuv422_420))
> -		return 0;
> +		return false;
>  
>  	if (mode_hdisplay > connector->dp.dsc_branch_caps.max_line_width)
> -		return 0;
> +		return false;
>  
>  	/*
>  	 * TODO: Pass the total pixel rate of all the streams transferred to
> @@ -1009,7 +1009,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  		drm_dbg_kms(display->drm,
>  			    "Unsupported slice width %d by DP DSC Sink device\n",
>  			    max_slice_width);
> -		return 0;
> +		return false;
>  	}
>  	/* Also take into account max slice width */
>  	min_slice_count = max(min_slice_count,
> @@ -1018,9 +1018,11 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
>  	return min_slice_count;
>  }
>  
> -u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> -				int mode_clock, int mode_hdisplay,
> -				int num_joined_pipes)
> +static bool
> +intel_dp_dsc_get_slice_config(const struct intel_connector *connector,
> +			      int mode_clock, int mode_hdisplay,
> +			      int num_joined_pipes,
> +			      struct intel_dsc_slice_config *config_ret)
>  {
>  	struct intel_display *display = to_intel_display(connector);
>  	int min_slice_count =
> @@ -1057,8 +1059,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  		if (mode_hdisplay % slices_per_line)
>  			continue;
>  
> -		if (min_slice_count <= slices_per_line)
> -			return slices_per_line;
> +		if (min_slice_count <= slices_per_line) {
> +			*config_ret = config;
> +
> +			return true;
> +		}
>  	}
>  
>  	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
> @@ -1069,7 +1074,21 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  		    min_slice_count,
>  		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
>  
> -	return 0;
> +	return false;
> +}
> +
> +u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> +				int mode_clock, int mode_hdisplay,
> +				int num_joined_pipes)
> +{
> +	struct intel_dsc_slice_config config;
> +
> +	if (!intel_dp_dsc_get_slice_config(connector,
> +					   mode_clock, mode_hdisplay,
> +					   num_joined_pipes, &config))
> +		return 0;
> +
> +	return intel_dsc_line_slice_count(&config);
>  }
>  
>  static bool source_can_output(struct intel_dp *intel_dp,


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config() Imre Deak
@ 2025-12-11  6:59   ` Hogander, Jouni
  2025-12-11 10:23     ` Imre Deak
  2025-12-12 18:17   ` [PATCH v2 " Imre Deak
  1 sibling, 1 reply; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-11  6:59 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Simplify things by computing the detailed slice configuration using
> intel_dp_dsc_get_slice_config(), instead of open-coding the same.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 35 +++----------------------
>  1 file changed, 3 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 003f4b18c1175..d41c75c6f7831 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2387,7 +2387,6 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  		&pipe_config->hw.adjusted_mode;
>  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
>  	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
> -	int slices_per_line;
>  	int ret;
>  
>  	/*
> @@ -2413,39 +2412,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  		}
>  	}
>  
> -	/* Calculate Slice count */
> -	slices_per_line = intel_dp_dsc_get_slice_count(connector,
> -						       adjusted_mode->crtc_clock,
> -						       adjusted_mode->crtc_hdisplay,
> -						       num_joined_pipes);
> -	if (!slices_per_line)
> +	if (!intel_dp_dsc_get_slice_config(connector, adjusted_mode->crtc_clock,
> +					   adjusted_mode->crtc_hdisplay, num_joined_pipes,
> +					   &pipe_config->dsc.slice_config))
>  		return -EINVAL;
>  
> -	/*
> -	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
> -	 * is greater than the maximum Cdclock and if slice count is even
> -	 * then we need to use 2 VDSC instances.
> -	 * In case of Ultrajoiner along with 12 slices we need to use 3
> -	 * VDSC instances.
> -	 */

I guess you have considered this comment useless? Anyway, the
patch looks ok:

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> -	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
> -
> -	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
> -	    slices_per_line == 12)
> -		pipe_config->dsc.slice_config.streams_per_pipe = 3;
> -	else if (pipe_config->joiner_pipes || slices_per_line > 1)
> -		pipe_config->dsc.slice_config.streams_per_pipe = 2;
> -	else
> -		pipe_config->dsc.slice_config.streams_per_pipe = 1;
> -
> -	pipe_config->dsc.slice_config.slices_per_stream =
> -		slices_per_line /
> -		pipe_config->dsc.slice_config.pipes_per_line /
> -		pipe_config->dsc.slice_config.streams_per_pipe;
> -
> -	drm_WARN_ON(display->drm,
> -		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
> -
>  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
>  	if (ret < 0) {
>  		drm_dbg_kms(display->drm,


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values
  2025-11-27 17:49 ` [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values Imre Deak
@ 2025-12-11  8:29   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-11  8:29 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> There is no reason to accept an invalid minimum/maximum DSC source input
> BPP value (i.e. a minimum DSC input BPP value above the maximum pipe BPP
> or a maximum DSC input BPP value below the minimum pipe BPP value), fail
> the state computation in these cases.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++-------
>  1 file changed, 21 insertions(+), 7 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index db7e49c17ca8d..1ef64b90492ea 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2669,16 +2669,30 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  	return true;
>  }
>  
> -static void
> -intel_dp_dsc_compute_pipe_bpp_limits(struct intel_dp *intel_dp,
> +static bool
> +intel_dp_dsc_compute_pipe_bpp_limits(struct intel_connector *connector,
>  				     struct link_config_limits *limits)
>  {
> -	struct intel_display *display = to_intel_display(intel_dp);
> +	struct intel_display *display = to_intel_display(connector);
> +	const struct link_config_limits orig_limits = *limits;
>  	int dsc_min_bpc = intel_dp_dsc_min_src_input_bpc();
>  	int dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
>  
> -	limits->pipe.max_bpp = clamp(limits->pipe.max_bpp, dsc_min_bpc * 3, dsc_max_bpc * 3);
> -	limits->pipe.min_bpp = clamp(limits->pipe.min_bpp, dsc_min_bpc * 3, dsc_max_bpc * 3);
> +	limits->pipe.min_bpp = max(limits->pipe.min_bpp, dsc_min_bpc * 3);
> +	limits->pipe.max_bpp = min(limits->pipe.max_bpp, dsc_max_bpc * 3);
> +
> +	if (limits->pipe.min_bpp <= 0 ||
> +	    limits->pipe.min_bpp > limits->pipe.max_bpp) {
> +		drm_dbg_kms(display->drm,
> +			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d)\n",
> +			    connector->base.base.id, connector->base.name,
> +			    dsc_min_bpc * 3, dsc_max_bpc * 3,
> +			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp);
> +
> +		return false;
> +	}
> +
> +	return true;
>  }
>  
>  bool
> @@ -2718,8 +2732,8 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
>  							respect_downstream_limits);
>  	}
>  
> -	if (dsc)
> -		intel_dp_dsc_compute_pipe_bpp_limits(intel_dp, limits);
> +	if (dsc && !intel_dp_dsc_compute_pipe_bpp_limits(connector, limits))
> +		return false;
>  
>  	if (is_mst || intel_dp->use_max_params) {
>  		/*


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps
  2025-11-27 17:49 ` [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps Imre Deak
@ 2025-12-11  8:51   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-11  8:51 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Align the minimum/maximum DSC input BPPs to the corresponding sink DSC
> input BPP capability limits already when computing the BPP limits. This
> alignment is also performed later during state computation, however
> alignment is also performed later during state computation, however
> there is no reason to initialize the limits to an unaligned/incorrect
> value.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 25 +++++++++++++++++++++--
>  1 file changed, 23 insertions(+), 2 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 1ef64b90492ea..e7a42c9e4fef1 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1904,6 +1904,23 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
>  	return intel_dp_dsc_min_src_input_bpc();
>  }
>  
> +static int align_min_sink_dsc_input_bpp(const struct intel_connector *connector,
> +					int min_pipe_bpp)
> +{
> +	u8 dsc_bpc[3];
> +	int num_bpc;
> +	int i;
> +
> +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> +						       dsc_bpc);
> +	for (i = num_bpc - 1; i >= 0; i--) {
> +		if (dsc_bpc[i] * 3 >= min_pipe_bpp)
> +			return dsc_bpc[i] * 3;
> +	}
> +
> +	return 0;
> +}
> +
>  static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
>  					int max_pipe_bpp)
>  {
> @@ -2679,15 +2696,19 @@ intel_dp_dsc_compute_pipe_bpp_limits(struct intel_connector *connector,
>  	int dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
>  
>  	limits->pipe.min_bpp = max(limits->pipe.min_bpp, dsc_min_bpc * 3);
> +	limits->pipe.min_bpp = align_min_sink_dsc_input_bpp(connector, limits->pipe.min_bpp);
> +
>  	limits->pipe.max_bpp = min(limits->pipe.max_bpp, dsc_max_bpc * 3);
> +	limits->pipe.max_bpp = align_max_sink_dsc_input_bpp(connector, limits->pipe.max_bpp);
>  
>  	if (limits->pipe.min_bpp <= 0 ||
>  	    limits->pipe.min_bpp > limits->pipe.max_bpp) {
>  		drm_dbg_kms(display->drm,
> -			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d)\n",
> +			    "[CONNECTOR:%d:%s] Invalid DSC src/sink input BPP (src:%d-%d pipe:%d-%d sink-align:%d-%d)\n",
>  			    connector->base.base.id, connector->base.name,
>  			    dsc_min_bpc * 3, dsc_max_bpc * 3,
> -			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp);
> +			    orig_limits.pipe.min_bpp, orig_limits.pipe.max_bpp,
> +			    limits->pipe.min_bpp, limits->pipe.max_bpp);
>  
>  		return false;
>  	}


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  2025-12-11  6:55   ` Hogander, Jouni
@ 2025-12-11  9:52     ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-11  9:52 UTC (permalink / raw)
  To: Jouni Hogander
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Thu, Dec 11, 2025 at 08:55:03AM +0200, Jouni Hogander wrote:
> On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> > Add intel_dp_dsc_get_slice_config() to compute the detailed slice
> > configuration and determine the slices-per-line value (returned by
> > intel_dp_dsc_get_slice_count()) using this function.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 37 +++++++++++++++++++------
> >  1 file changed, 28 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 0db401ec0156f..003f4b18c1175 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -971,10 +971,10 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
> >  	 */
> >  	if (mode_clock > max(connector->dp.dsc_branch_caps.overall_throughput.rgb_yuv444,
> >  			     connector->dp.dsc_branch_caps.overall_throughput.yuv422_420))
> > -		return 0;
> > +		return false;
> 
> Why are you changing these to return false? Otherwise looks ok.

Uh, thanks for catching this. It's a rebase fail on my part after
reordering this patch wrt. "Factor out intel_dp_dsc_min_slice_count()"
in the patchset. What I meant was to change the return value to a bool
only for intel_dp_dsc_get_slice_count()/intel_dp_dsc_get_slice_config(),
will send an updated version.

> 
> BR,
> 
> Jouni Högander
> 
> >  
> >  	if (mode_hdisplay > connector->dp.dsc_branch_caps.max_line_width)
> > -		return 0;
> > +		return false;
> >  
> >  	/*
> >  	 * TODO: Pass the total pixel rate of all the streams transferred to
> > @@ -1009,7 +1009,7 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
> >  		drm_dbg_kms(display->drm,
> >  			    "Unsupported slice width %d by DP DSC Sink device\n",
> >  			    max_slice_width);
> > -		return 0;
> > +		return false;
> >  	}
> >  	/* Also take into account max slice width */
> >  	min_slice_count = max(min_slice_count,
> > @@ -1018,9 +1018,11 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
> >  	return min_slice_count;
> >  }
> >  
> > -u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> > -				int mode_clock, int mode_hdisplay,
> > -				int num_joined_pipes)
> > +static bool
> > +intel_dp_dsc_get_slice_config(const struct intel_connector *connector,
> > +			      int mode_clock, int mode_hdisplay,
> > +			      int num_joined_pipes,
> > +			      struct intel_dsc_slice_config *config_ret)
> >  {
> >  	struct intel_display *display = to_intel_display(connector);
> >  	int min_slice_count =
> > @@ -1057,8 +1059,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> >  		if (mode_hdisplay % slices_per_line)
> >  			continue;
> >  
> > -		if (min_slice_count <= slices_per_line)
> > -			return slices_per_line;
> > +		if (min_slice_count <= slices_per_line) {
> > +			*config_ret = config;
> > +
> > +			return true;
> > +		}
> >  	}
> >  
> >  	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
> > @@ -1069,7 +1074,21 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> >  		    min_slice_count,
> >  		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
> >  
> > -	return 0;
> > +	return false;
> > +}
> > +
> > +u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
> > +				int mode_clock, int mode_hdisplay,
> > +				int num_joined_pipes)
> > +{
> > +	struct intel_dsc_slice_config config;
> > +
> > +	if (!intel_dp_dsc_get_slice_config(connector,
> > +					   mode_clock, mode_hdisplay,
> > +					   num_joined_pipes, &config))
> > +		return 0;
> > +
> > +	return intel_dsc_line_slice_count(&config);
> >  }
> >  
> >  static bool source_can_output(struct intel_dp *intel_dp,

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config()
  2025-12-11  6:59   ` Hogander, Jouni
@ 2025-12-11 10:23     ` Imre Deak
  2025-12-12 18:03       ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-11 10:23 UTC (permalink / raw)
  To: Jouni Hogander
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Thu, Dec 11, 2025 at 08:59:25AM +0200, Jouni Hogander wrote:
> On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> > Simplify things by computing the detailed slice configuration using
> > intel_dp_dsc_get_slice_config(), instead of open-coding the same.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 35 +++----------------------
> >  1 file changed, 3 insertions(+), 32 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> > b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 003f4b18c1175..d41c75c6f7831 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -2387,7 +2387,6 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> >  		&pipe_config->hw.adjusted_mode;
> >  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> >  	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
> > -	int slices_per_line;
> >  	int ret;
> >  
> >  	/*
> > @@ -2413,39 +2412,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> >  		}
> >  	}
> >  
> > -	/* Calculate Slice count */
> > -	slices_per_line = intel_dp_dsc_get_slice_count(connector,
> > -						       adjusted_mode->crtc_clock,
> > -						       adjusted_mode->crtc_hdisplay,
> > -						       num_joined_pipes);
> > -	if (!slices_per_line)
> > +	if (!intel_dp_dsc_get_slice_config(connector, adjusted_mode->crtc_clock,
> > +					   adjusted_mode->crtc_hdisplay, num_joined_pipes,
> > +					   &pipe_config->dsc.slice_config))
> >  		return -EINVAL;
> >  
> > -	/*
> > -	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
> > -	 * is greater than the maximum Cdclock and if slice count is even
> > -	 * then we need to use 2 VDSC instances.
> > -	 * In case of Ultrajoiner along with 12 slices we need to use 3
> > -	 * VDSC instances.
> > -	 */
> 
> I guess you have considered this comment useless?

A stricter condition between the pixel clock (mode clock) and the CD
clock is already described in intel_dp_dsc_min_slice_count(). I can
clarify the comment in that function further, also mentioning the above
VDSC engine 1 ppc limit as a reason for the condition there.

The 12 slices-per-line / 3 VDSC streams-per-pipe logic is already
described in intel_dsc_get_slice_config().

> Anyways, patch looks ok:
> 
> Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
> 
> > -	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
> > -
> > -	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
> > -	    slices_per_line == 12)
> > -		pipe_config->dsc.slice_config.streams_per_pipe = 3;
> > -	else if (pipe_config->joiner_pipes || slices_per_line > 1)
> > -		pipe_config->dsc.slice_config.streams_per_pipe = 2;
> > -	else
> > -		pipe_config->dsc.slice_config.streams_per_pipe = 1;
> > -
> > -	pipe_config->dsc.slice_config.slices_per_stream =
> > -		slices_per_line /
> > -		pipe_config->dsc.slice_config.pipes_per_line /
> > -		pipe_config->dsc.slice_config.streams_per_pipe;
> > -
> > -	drm_WARN_ON(display->drm,
> > -		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
> > -
> >  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
> >  	if (ret < 0) {
> >  		drm_dbg_kms(display->drm,
> 

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits
  2025-11-27 17:49 ` [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits Imre Deak
@ 2025-12-12  9:17   ` Govindapillai, Vinod
  2025-12-12 11:09     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:17 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Align the minimum/maximum DSC compressed BPPs to the corresponding
> source compressed BPP limits already when computing the BPP limits.
> This alignment is also performed later during state computation,
> however there is no reason to initialize the limits to an
> unaligned/incorrect value.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 57 +++++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index e7a42c9e4fef1..801e8fd6b229e 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -867,6 +867,20 @@ small_joiner_ram_size_bits(struct intel_display *display)
>  		return 6144 * 8;
>  }
>  
> +static int align_min_vesa_compressed_bpp_x16(int min_link_bpp_x16)
> +{
> +	int i;
> +
> +	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
> +
> +		if (vesa_bpp_x16 >= min_link_bpp_x16)
> +			return vesa_bpp_x16;
> +	}
> +
> +	return 0;
> +}
> +
>  static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
>  {
>  	int i;
> @@ -2261,6 +2275,40 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
>  	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
>  }
>  
> +static int align_min_compressed_bpp_x16(const struct intel_connector *connector, int min_bpp_x16)
> +{
> +	struct intel_display *display = to_intel_display(connector);
> +
> +	if (DISPLAY_VER(display) >= 13) {
> +		int bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
> +
> +		drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
> +
> +		return round_up(min_bpp_x16, bpp_step_x16);
> +	} else {
> +		return align_min_vesa_compressed_bpp_x16(min_bpp_x16);
> +	}
> +}
> +
> +static int align_max_compressed_bpp_x16(const struct intel_connector *connector,
> +					enum intel_output_format output_format,
> +					int pipe_bpp, int max_bpp_x16)
> +{
> +	struct intel_display *display = to_intel_display(connector);
> +	int link_bpp_x16 = intel_dp_output_format_link_bpp_x16(output_format, pipe_bpp);
> +	int bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
> +
> +	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
> +
> +	if (DISPLAY_VER(display) >= 13) {
> +		drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
> +
> +		return round_down(max_bpp_x16, bpp_step_x16);
> +	} else {
> +		return align_max_vesa_compressed_bpp_x16(max_bpp_x16);
> +	}

well.. return align_max_vesa_compressed_bpp_x16(...) could be placed
without the "else" branch, both here and above.

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> +}
> +
>  /*
>   * Find the max compressed BPP we can find a link configuration for. The BPPs to
>   * try depend on the source (platform) and sink.
> @@ -2639,6 +2687,9 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
>  		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
>  
> +		limits->link.min_bpp_x16 =
> +			align_min_compressed_bpp_x16(connector, limits->link.min_bpp_x16);
> +
>  		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
>  		joiner_max_bpp =
>  			get_max_compressed_bpp_with_joiner(display,
> @@ -2663,6 +2714,12 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  				    connector->base.base.id, connector->base.name,
>  				    FXP_Q4_ARGS(max_link_bpp_x16));
>  		}
> +
> +		max_link_bpp_x16 =
> +			align_max_compressed_bpp_x16(connector,
> +						     crtc_state->output_format,
> +						     limits->pipe.max_bpp,
> +						     max_link_bpp_x16);
>  	}
>  
>  	limits->link.max_bpp_x16 = max_link_bpp_x16;
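For reference, the two alignment paths added by this patch can be illustrated
with a small standalone sketch: on DISPLAY_VER >= 13 the limits are rounded to
a power-of-two BPP step, otherwise to the nearest entry of the valid_dsc_bpp[]
table. The table values below ({6, 8, 10, 12, 15}) and the helper names are
assumptions for illustration, not taken from the hunk:

```c
#include <assert.h>
#include <stddef.h>

/* Assumed classic VESA DSC bpp table (integer bpp values). */
static const int valid_dsc_bpp[] = { 6, 8, 10, 12, 15 };
#define N_BPP (sizeof(valid_dsc_bpp) / sizeof(valid_dsc_bpp[0]))

/* Q4 fixed point used throughout the series: bpp_x16 = bpp * 16. */
#define Q4(x) ((x) * 16)

/* Pre-display-13 path: round min up to the next valid VESA bpp (0 if none). */
static int align_min_vesa_bpp_x16(int min_bpp_x16)
{
	size_t i;

	for (i = 0; i < N_BPP; i++)
		if (Q4(valid_dsc_bpp[i]) >= min_bpp_x16)
			return Q4(valid_dsc_bpp[i]);

	return 0;
}

/* Pre-display-13 path: round max down to the previous valid VESA bpp. */
static int align_max_vesa_bpp_x16(int max_bpp_x16)
{
	int i;

	for (i = (int)N_BPP - 1; i >= 0; i--)
		if (Q4(valid_dsc_bpp[i]) <= max_bpp_x16)
			return Q4(valid_dsc_bpp[i]);

	return 0;
}

/* Display-13+ path: round to a power-of-two step, as round_up()/round_down() do. */
static int round_up_p2(int v, int step)   { return (v + step - 1) & ~(step - 1); }
static int round_down_p2(int v, int step) { return v & ~(step - 1); }
```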


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits()
  2025-11-27 17:49 ` [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits() Imre Deak
@ 2025-12-12  9:23   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:23 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> The intel_dp pointer can be deduced from the connector pointer, so it's
> enough to pass only the connector to
> intel_dp_compute_config_link_bpp_limits(). Do so.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 801e8fd6b229e..5ad71e697e585 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2653,13 +2653,13 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
>   * range, crtc_state and dsc mode. Return true on success.
>   */
>  static bool
> -intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
> -					const struct intel_connector *connector,
> +intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  					const struct intel_crtc_state *crtc_state,
>  					bool dsc,
>  					struct link_config_limits *limits)
>  {
> -	struct intel_display *display = to_intel_display(intel_dp);
> +	struct intel_display *display = to_intel_display(connector);
> +	struct intel_dp *intel_dp = intel_attached_dp(connector);
>  	const struct drm_display_mode *adjusted_mode =
>  		&crtc_state->hw.adjusted_mode;
>  	const struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
> @@ -2831,8 +2831,7 @@ intel_dp_compute_config_limits(struct intel_dp *intel_dp,
>  
>  	intel_dp_test_compute_config(intel_dp, crtc_state, limits);
>  
> -	return intel_dp_compute_config_link_bpp_limits(intel_dp,
> -						       connector,
> +	return intel_dp_compute_config_link_bpp_limits(connector,
>  						       crtc_state,
>  						       dsc,
>  						       limits);


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp()
  2025-11-27 17:49 ` [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp() Imre Deak
@ 2025-12-12  9:27   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:27 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Prepare for follow-up changes also calling
> intel_dp_dsc_min_sink_compressed_bpp() /
> intel_dp_dsc_max_sink_compressed_bpp_x16()
> without an intel_crtc_state.
> 
> While at it remove the stale function declarations from the header
> file.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 24 ++++++++++++-----------
> -
>  drivers/gpu/drm/i915/display/intel_dp.h |  4 ----
>  2 files changed, 12 insertions(+), 16 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 5ad71e697e585..54a037fcf5111 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2156,7 +2156,7 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  
>  static
>  u16 intel_dp_dsc_max_sink_compressed_bppx16(const struct intel_connector *connector,
> -					    const struct intel_crtc_state *pipe_config,
> +					    enum intel_output_format output_format,
>  					    int bpc)
>  {
>  	u16 max_bppx16 = drm_edp_dsc_sink_output_bpp(connector->dp.dsc_dpcd);
> @@ -2167,43 +2167,43 @@ u16 intel_dp_dsc_max_sink_compressed_bppx16(const struct intel_connector *connec
>  	 * If support not given in DPCD 67h, 68h use the Maximum Allowed bit rate
>  	 * values as given in spec Table 2-157 DP v2.0
>  	 */
> -	switch (pipe_config->output_format) {
> +	switch (output_format) {
>  	case INTEL_OUTPUT_FORMAT_RGB:
>  	case INTEL_OUTPUT_FORMAT_YCBCR444:
>  		return (3 * bpc) << 4;
>  	case INTEL_OUTPUT_FORMAT_YCBCR420:
>  		return (3 * (bpc / 2)) << 4;
>  	default:
> -		MISSING_CASE(pipe_config->output_format);
> +		MISSING_CASE(output_format);
>  		break;
>  	}
>  
>  	return 0;
>  }
>  
> -int intel_dp_dsc_sink_min_compressed_bpp(const struct intel_crtc_state *pipe_config)
> +static int intel_dp_dsc_sink_min_compressed_bpp(enum intel_output_format output_format)
>  {
>  	/* From Mandatory bit rate range Support Table 2-157 (DP v2.0) */
> -	switch (pipe_config->output_format) {
> +	switch (output_format) {
>  	case INTEL_OUTPUT_FORMAT_RGB:
>  	case INTEL_OUTPUT_FORMAT_YCBCR444:
>  		return 8;
>  	case INTEL_OUTPUT_FORMAT_YCBCR420:
>  		return 6;
>  	default:
> -		MISSING_CASE(pipe_config->output_format);
> +		MISSING_CASE(output_format);
>  		break;
>  	}
>  
>  	return 0;
>  }
>  
> -int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
> -					 const struct intel_crtc_state *pipe_config,
> -					 int bpc)
> +static int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
> +						enum intel_output_format output_format,
> +						int bpc)
>  {
>  	return intel_dp_dsc_max_sink_compressed_bppx16(connector,
> -						       pipe_config, bpc) >> 4;
> +						       output_format, bpc) >> 4;
>  }
>  
>  int intel_dp_dsc_min_src_compressed_bpp(void)
> @@ -2683,7 +2683,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  		int joiner_max_bpp;
>  
>  		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
> -		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state);
> +		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state->output_format);
>  		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
>  		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
>  
> @@ -2697,7 +2697,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  							   adjusted_mode->hdisplay,
>  							   intel_crtc_num_joined_pipes(crtc_state));
>  		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
> -									crtc_state,
> +									crtc_state->output_format,
>  									limits->pipe.max_bpp / 3);
>  		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
>  		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 30eebb8cad6d2..489b8c945da39 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -150,10 +150,6 @@ u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
>  					enum intel_output_format output_format,
>  					u32 pipe_bpp,
>  					u32 timeslots);
> -int intel_dp_dsc_sink_min_compressed_bpp(const struct intel_crtc_state *pipe_config);
> -int intel_dp_dsc_sink_max_compressed_bpp(const struct intel_connector *connector,
> -					 const struct intel_crtc_state *pipe_config,
> -					 int bpc);
>  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16);
>  u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  				int mode_clock, int mode_hdisplay,
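The fallback sink limits this patch parameterizes by output format (the quoted
DP v2.0 Table 2-157 values) can be summarized as a standalone sketch; the
FMT_* enum here is a hypothetical stand-in for the driver's
INTEL_OUTPUT_FORMAT_* values:

```c
#include <assert.h>

enum output_format { FMT_RGB, FMT_YCBCR444, FMT_YCBCR420 };

/* Min compressed bpp per output format (integer bpp), per Table 2-157. */
static int sink_min_compressed_bpp(enum output_format fmt)
{
	switch (fmt) {
	case FMT_RGB:
	case FMT_YCBCR444:
		return 8;
	case FMT_YCBCR420:
		return 6;
	}
	return 0;
}

/* Max compressed bpp fallback in Q4 (bpp * 16), used when the DPCD
 * does not report a value: one bpp per component, halved chroma for 4:2:0. */
static int sink_max_compressed_bpp_x16(enum output_format fmt, int bpc)
{
	switch (fmt) {
	case FMT_RGB:
	case FMT_YCBCR444:
		return (3 * bpc) << 4;		/* 3 full-rate components */
	case FMT_YCBCR420:
		return (3 * (bpc / 2)) << 4;	/* chroma subsampled */
	}
	return 0;
}
```

As in the patch, the integer-bpp variant is simply the Q4 value shifted right by 4.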


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16()
  2025-11-27 17:49 ` [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16() Imre Deak
@ 2025-12-12  9:31   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:31 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Prepare for follow-up changes using
> dsc_throughput_quirk_max_bpp_x16()
> without an intel_crtc_state pointer.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 54a037fcf5111..193d9c2079347 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2616,11 +2616,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  
>  static int
>  dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
> -				 const struct intel_crtc_state *crtc_state)
> +				 int mode_clock)
>  {
> -	const struct drm_display_mode *adjusted_mode =
> -		&crtc_state->hw.adjusted_mode;
> -
>  	if (!connector->dp.dsc_throughput_quirk)
>  		return INT_MAX;
>  
> @@ -2640,7 +2637,7 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
>  	 * smaller than the YUV422/420 value, but let's not depend on this
>  	 * assumption.
>  	 */
> -	if (adjusted_mode->crtc_clock <
> +	if (mode_clock <
>  	    min(connector->dp.dsc_branch_caps.overall_throughput.rgb_yuv444,
>  		connector->dp.dsc_branch_caps.overall_throughput.yuv422_420) / 2)
>  		return INT_MAX;
> @@ -2704,7 +2701,8 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
>  
> -		throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector, crtc_state);
> +		throughput_max_bpp_x16 =
> +			dsc_throughput_quirk_max_bpp_x16(connector, adjusted_mode->crtc_clock);
>  		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
>  			max_link_bpp_x16 = throughput_max_bpp_x16;
>  


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16()
  2025-11-27 17:49 ` [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16() Imre Deak
@ 2025-12-12  9:39   ` Govindapillai, Vinod
  2025-12-12 11:01     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:39 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out compute_min_compressed_bpp_x16(), which will also be used
> during mode validation in a follow-up change.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 26 +++++++++++++++++--------
>  1 file changed, 18 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 193d9c2079347..2a5f5f1b4b128 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2645,6 +2645,23 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
>  	return fxp_q4_from_int(12);
>  }
>  
> +static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
> +					  enum intel_output_format output_format)
> +{

Could be "const struct intel_connector". align_min_compressed_bpp_x16()
also takes const intel_connector

with that,

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> +	int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
> +	int min_bpp_x16;
> +
> +	dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
> +	dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(output_format);
> +	dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
> +
> +	min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
> +
> +	min_bpp_x16 = align_min_compressed_bpp_x16(connector, min_bpp_x16);
> +
> +	return min_bpp_x16;
> +}
> +
>  /*
>   * Calculate the output link min, max bpp values in limits based on the pipe bpp
>   * range, crtc_state and dsc mode. Return true on success.
> @@ -2674,18 +2691,11 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  
>  		limits->link.min_bpp_x16 = fxp_q4_from_int(limits->pipe.min_bpp);
>  	} else {
> -		int dsc_src_min_bpp, dsc_sink_min_bpp, dsc_min_bpp;
>  		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
>  		int throughput_max_bpp_x16;
>  		int joiner_max_bpp;
> -
> -		dsc_src_min_bpp = intel_dp_dsc_min_src_compressed_bpp();
> -		dsc_sink_min_bpp = intel_dp_dsc_sink_min_compressed_bpp(crtc_state->output_format);
> -		dsc_min_bpp = max(dsc_src_min_bpp, dsc_sink_min_bpp);
> -		limits->link.min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
> -
>  		limits->link.min_bpp_x16 =
> -			align_min_compressed_bpp_x16(connector, limits->link.min_bpp_x16);
> +			compute_min_compressed_bpp_x16(connector, crtc_state->output_format);
>  
>  		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
>  		joiner_max_bpp =


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16()
  2025-11-27 17:49 ` [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16() Imre Deak
@ 2025-12-12  9:50   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12  9:50 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out compute_max_compressed_bpp_x16(), which will also be used
> during mode validation in a follow-up change.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 82 +++++++++++++++----------
>  1 file changed, 49 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 2a5f5f1b4b128..9deb99eda8813 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2662,6 +2662,48 @@ static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
>  	return min_bpp_x16;
>  }
>  
> +static int compute_max_compressed_bpp_x16(struct intel_connector *connector,

const ?

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> +					  int mode_clock, int mode_hdisplay,
> +					  int num_joined_pipes,
> +					  enum intel_output_format output_format,
> +					  int pipe_max_bpp, int max_link_bpp_x16)
> +{
> +	struct intel_display *display = to_intel_display(connector);
> +	struct intel_dp *intel_dp = intel_attached_dp(connector);
> +	int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
> +	int throughput_max_bpp_x16;
> +	int joiner_max_bpp;
> +
> +	dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
> +	joiner_max_bpp = get_max_compressed_bpp_with_joiner(display,
> +							    mode_clock,
> +							    mode_hdisplay,
> +							    num_joined_pipes);
> +	dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
> +								output_format,
> +								pipe_max_bpp / 3);
> +	dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
> +	dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
> +
> +	max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
> +
> +	throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector,
> +								  mode_clock);
> +	if (throughput_max_bpp_x16 < max_link_bpp_x16) {
> +		max_link_bpp_x16 = throughput_max_bpp_x16;
> +
> +		drm_dbg_kms(display->drm,
> +			    "[CONNECTOR:%d:%s] Decreasing link max bpp to " FXP_Q4_FMT " due to DSC throughput quirk\n",
> +			    connector->base.base.id, connector->base.name,
> +			    FXP_Q4_ARGS(max_link_bpp_x16));
> +	}
> +
> +	max_link_bpp_x16 = align_max_compressed_bpp_x16(connector, output_format,
> +							pipe_max_bpp, max_link_bpp_x16);
> +
> +	return max_link_bpp_x16;
> +}
> +
>  /*
>   * Calculate the output link min, max bpp values in limits based on the pipe bpp
>   * range, crtc_state and dsc mode. Return true on success.
> @@ -2691,43 +2733,17 @@ intel_dp_compute_config_link_bpp_limits(struct intel_connector *connector,
>  
>  		limits->link.min_bpp_x16 = fxp_q4_from_int(limits->pipe.min_bpp);
>  	} else {
> -		int dsc_src_max_bpp, dsc_sink_max_bpp, dsc_max_bpp;
> -		int throughput_max_bpp_x16;
> -		int joiner_max_bpp;
>  		limits->link.min_bpp_x16 =
>  			compute_min_compressed_bpp_x16(connector, crtc_state->output_format);
>  
> -		dsc_src_max_bpp = dsc_src_max_compressed_bpp(intel_dp);
> -		joiner_max_bpp =
> -			get_max_compressed_bpp_with_joiner(display,
> -							   adjusted_mode->crtc_clock,
> -							   adjusted_mode->hdisplay,
> -							   intel_crtc_num_joined_pipes(crtc_state));
> -		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
> -									crtc_state->output_format,
> -									limits->pipe.max_bpp / 3);
> -		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
> -		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
> -
> -		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
> -
> -		throughput_max_bpp_x16 =
> -			dsc_throughput_quirk_max_bpp_x16(connector, adjusted_mode->crtc_clock);
> -		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
> -			max_link_bpp_x16 = throughput_max_bpp_x16;
> -
> -			drm_dbg_kms(display->drm,
> -				    "[CRTC:%d:%s][CONNECTOR:%d:%s] Decreasing link max bpp to " FXP_Q4_FMT " due to DSC throughput quirk\n",
> -				    crtc->base.base.id, crtc->base.name,
> -				    connector->base.base.id, connector->base.name,
> -				    FXP_Q4_ARGS(max_link_bpp_x16));
> -		}
> -
>  		max_link_bpp_x16 =
> -			align_max_compressed_bpp_x16(connector,
> -						     crtc_state->output_format,
> -						     limits->pipe.max_bpp,
> -						     max_link_bpp_x16);
> +			compute_max_compressed_bpp_x16(connector,
> +						       adjusted_mode->crtc_clock,
> +						       adjusted_mode->hdisplay,
> +						       intel_crtc_num_joined_pipes(crtc_state),
> +						       crtc_state->output_format,
> +						       limits->pipe.max_bpp,
> +						       max_link_bpp_x16);
>  	}
>  
>  	limits->link.max_bpp_x16 = max_link_bpp_x16;


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16()
  2025-12-12  9:39   ` Govindapillai, Vinod
@ 2025-12-12 11:01     ` Imre Deak
  2025-12-12 11:41       ` Govindapillai, Vinod
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-12 11:01 UTC (permalink / raw)
  To: Vinod Govindapillai
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Fri, Dec 12, 2025 at 11:39:51AM +0200, Vinod Govindapillai wrote:
> On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > Factor out compute_min_compressed_bpp_x16() also used during mode
> > validation in a follow-up change.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 26 +++++++++++++++++--------
> >  1 file changed, 18 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 193d9c2079347..2a5f5f1b4b128 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -2645,6 +2645,23 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
> >  	return fxp_q4_from_int(12);
> >  }
> >  
> > +static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
> > +					  enum intel_output_format output_format)
> > +{
> 
> Could be "const struct intel_connector".

My understanding is that the connector/crtc etc. objects should not be
passed via a const pointer vs. the connector_state/crtc_state etc.
state pointers for these objects which should be const whenever
possible.

> align_min_compressed_bpp_x16() also takes const intel_connector

Yes, but only to match align_max_compressed_bpp_x16() which is also
called from dsc_compute_compressed_bpp(). The latter one can pass only a
const connector pointer to the called function.

> with that,
> 
> Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits
  2025-12-12  9:17   ` Govindapillai, Vinod
@ 2025-12-12 11:09     ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-12 11:09 UTC (permalink / raw)
  To: Vinod Govindapillai
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Fri, Dec 12, 2025 at 11:17:52AM +0200, Vinod Govindapillai wrote:
> [...]
> > +
> > +static int align_max_compressed_bpp_x16(const struct intel_connector *connector,
> > +					enum intel_output_format output_format,
> > +					int pipe_bpp, int max_bpp_x16)
> > +{
> > +	struct intel_display *display = to_intel_display(connector);
> > +	int link_bpp_x16 = intel_dp_output_format_link_bpp_x16(output_format, pipe_bpp);
> > +	int bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
> > +
> > +	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
> > +
> > +	if (DISPLAY_VER(display) >= 13) {
> > +		drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
> > +
> > +		return round_down(max_bpp_x16, bpp_step_x16);
> > +	} else {
> > +		return align_max_vesa_compressed_bpp_x16(max_bpp_x16);
> > +	}
> 
> well.. return align_max_vesa_compressed_bpp_x16(...) could be placed
> without the "else" branch, both here and above.

I'm not aware of a rule saying not to use an else in this case. Imo
using an else here is the more readable form.

> Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> 
> > +}

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16()
  2025-12-12 11:01     ` Imre Deak
@ 2025-12-12 11:41       ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 11:41 UTC (permalink / raw)
  To: Deak, Imre
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Fri, 2025-12-12 at 13:01 +0200, Imre Deak wrote:
> On Fri, Dec 12, 2025 at 11:39:51AM +0200, Vinod Govindapillai wrote:
> > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > Factor out compute_min_compressed_bpp_x16() also used during mode
> > > validation in a follow-up change.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 26 +++++++++++++++++--------
> > >  1 file changed, 18 insertions(+), 8 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index 193d9c2079347..2a5f5f1b4b128 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -2645,6 +2645,23 @@ dsc_throughput_quirk_max_bpp_x16(const struct intel_connector *connector,
> > >  	return fxp_q4_from_int(12);
> > >  }
> > >  
> > > +static int compute_min_compressed_bpp_x16(struct intel_connector *connector,
> > > +					  enum intel_output_format output_format)
> > > +{
> > 
> > Could be "const struct intel_connector".
> 
> My understanding is that the connector/crtc etc. objects should not
> be
> passed via a const pointer vs. the connector_state/crtc_state etc.
> state pointers for these objects which should be const whenever
> possible.

Okay. Good to know. 

BR
Vinod

> 
> > align_min_compressed_bpp_x16() also takes const intel_connector
> 
> Yes, but only to match align_max_compressed_bpp_x16() which is also
> called from dsc_compute_compressed_bpp(). The latter one can pass
> only a
> const connector pointer to the called function.
> 
> > with that,
> > 
> > Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc()
  2025-11-27 17:50 ` [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc() Imre Deak
@ 2025-12-12 11:43   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 11:43 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Add intel_dp_mode_valid_with_dsc() and call it during SST/MST mode
> validation, preparing for a follow-up change that verifies the mode's
> required BW the same way this is done elsewhere during state
> computation (which in turn depends on the mode's effective data rate
> with the corresponding BW overhead).
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c     | 57 +++++++++++++++------
>  drivers/gpu/drm/i915/display/intel_dp.h     |  7 +++
>  drivers/gpu/drm/i915/display/intel_dp_mst.c | 29 ++++-------
>  3 files changed, 57 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 9deb99eda8813..b40edf4febcb7 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1579,24 +1579,20 @@ intel_dp_mode_valid(struct drm_connector *_connector,
>  			dsc_slice_count =
>  				drm_dp_dsc_sink_max_slice_count(connector->dp.dsc_dpcd,
>  								true);
> +			dsc = dsc_max_compressed_bpp && dsc_slice_count;
>  		} else if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
> -			dsc_max_compressed_bpp =
> -				intel_dp_dsc_get_max_compressed_bpp(display,
> -								    max_link_clock,
> -								    max_lanes,
> -								    target_clock,
> -								    mode->hdisplay,
> -								    num_joined_pipes,
> -								    output_format,
> -								    pipe_bpp, 64);
> -			dsc_slice_count =
> -				intel_dp_dsc_get_slice_count(connector,
> -							     target_clock,
> -							     mode->hdisplay,
> -							     num_joined_pipes);
> +			unsigned long bw_overhead_flags = 0;
> +
> +			if (!drm_dp_is_uhbr_rate(max_link_clock))
> +				bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
> +
> +			dsc = intel_dp_mode_valid_with_dsc(connector,
> +							   max_link_clock, max_lanes,
> +							   target_clock, mode->hdisplay,
> +							   num_joined_pipes,
> +							   output_format, pipe_bpp,
> +							   bw_overhead_flags);
>  		}
> -
> -		dsc = dsc_max_compressed_bpp && dsc_slice_count;
>  	}
>  
>  	if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc)
> @@ -2704,6 +2700,35 @@ static int compute_max_compressed_bpp_x16(struct intel_connector *connector,
>  	return max_link_bpp_x16;
>  }
>  
> +bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
> +				  int link_clock, int lane_count,
> +				  int mode_clock, int mode_hdisplay,
> +				  int num_joined_pipes,
> +				  enum intel_output_format
> output_format,
> +				  int pipe_bpp, unsigned long
> bw_overhead_flags)

bw_overhead_flags is not used in this patch. But I see that this is
being used in the next patch.

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> +{
> +	struct intel_display *display = to_intel_display(connector);
> +	int dsc_max_compressed_bpp;
> +	int dsc_slice_count;
> +
> +	dsc_max_compressed_bpp =
> +		intel_dp_dsc_get_max_compressed_bpp(display,
> +						    link_clock,
> +						    lane_count,
> +						    mode_clock,
> +						    mode_hdisplay,
> +						    num_joined_pipes,
> +						    output_format,
> +						    pipe_bpp, 64);
> +	dsc_slice_count =
> +		intel_dp_dsc_get_slice_count(connector,
> +					     mode_clock,
> +					     mode_hdisplay,
> +					     num_joined_pipes);
> +
> +	return dsc_max_compressed_bpp && dsc_slice_count;
> +}
> +
>  /*
>   * Calculate the output link min, max bpp values in limits based on
> the pipe bpp
>   * range, crtc_state and dsc mode. Return true on success.
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 489b8c945da39..0ec7baec7a8e8 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -150,6 +150,13 @@ u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
>  					enum intel_output_format output_format,
>  					u32 pipe_bpp,
>  					u32 timeslots);
> +
> +bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
> +				  int link_clock, int lane_count,
> +				  int mode_clock, int mode_hdisplay,
> +				  int num_joined_pipes,
> +				  enum intel_output_format output_format,
> +				  int pipe_bpp, unsigned long bw_overhead_flags);
>  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16);
>  u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  				int mode_clock, int mode_hdisplay,
> diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> index 0db6ed2d9664c..e3f8679e95252 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
> @@ -1462,8 +1462,6 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
>  		DRM_DP_BW_OVERHEAD_MST | DRM_DP_BW_OVERHEAD_SSC_REF_CLK;
>  	int ret;
>  	bool dsc = false;
> -	u16 dsc_max_compressed_bpp = 0;
> -	u8 dsc_slice_count = 0;
>  	int target_clock = mode->clock;
>  	int num_joined_pipes;
>  
> @@ -1522,31 +1520,22 @@ mst_connector_mode_valid_ctx(struct drm_connector *_connector,
>  		return 0;
>  	}
>  
> -	if (intel_dp_has_dsc(connector)) {
> +	if (intel_dp_has_dsc(connector) && drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
>  		/*
>  		 * TBD pass the connector BPC,
>  		 * for now U8_MAX so that max BPC on that platform would be picked
>  		 */
>  		int pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, U8_MAX);
>  
> -		if (drm_dp_sink_supports_fec(connector->dp.fec_capability)) {
> -			dsc_max_compressed_bpp =
> -				intel_dp_dsc_get_max_compressed_bpp(display,
> -								    max_link_clock,
> -								    max_lanes,
> -								    target_clock,
> -								    mode->hdisplay,
> -								    num_joined_pipes,
> -								    INTEL_OUTPUT_FORMAT_RGB,
> -								    pipe_bpp, 64);
> -			dsc_slice_count =
> -				intel_dp_dsc_get_slice_count(connector,
> -							     target_clock,
> -							     mode->hdisplay,
> -							     num_joined_pipes);
> -		}
> +		if (!drm_dp_is_uhbr_rate(max_link_clock))
> +			bw_overhead_flags |= DRM_DP_BW_OVERHEAD_FEC;
>  
> -		dsc = dsc_max_compressed_bpp && dsc_slice_count;
> +		dsc = intel_dp_mode_valid_with_dsc(connector,
> +						   max_link_clock, max_lanes,
> +						   target_clock, mode->hdisplay,
> +						   num_joined_pipes,
> +						   INTEL_OUTPUT_FORMAT_RGB, pipe_bpp,
> +						   bw_overhead_flags);
>  	}
>  
>  	if (intel_dp_joiner_needs_dsc(display, num_joined_pipes) && !dsc) {


^ permalink raw reply	[flat|nested] 137+ messages in thread
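The check factored out into intel_dp_mode_valid_with_dsc() in the patch above boils down to: a mode is usable with DSC only if both a non-zero compressed BPP and a non-zero slice count can be derived for it. The following is a standalone sketch of that shape; the helper names and units (kbps, kHz) are illustrative assumptions, not the driver's actual interfaces.

```c
#include <stdbool.h>

/* Payload bits per pixel the link can carry (integer floor); illustrative. */
static int sketch_max_compressed_bpp(int link_kbps, int lanes, int mode_khz)
{
	return mode_khz ? (link_kbps * lanes) / mode_khz : 0;
}

/* Slices needed so each slice stays under a per-slice clock limit; illustrative. */
static int sketch_slice_count(int mode_khz, int max_slice_khz)
{
	return max_slice_khz ? (mode_khz + max_slice_khz - 1) / max_slice_khz : 0;
}

/* The mode is valid with DSC only if both quantities are non-zero. */
static bool sketch_mode_valid_with_dsc(int link_kbps, int lanes,
				       int mode_khz, int max_slice_khz)
{
	return sketch_max_compressed_bpp(link_kbps, lanes, mode_khz) &&
	       sketch_slice_count(mode_khz, max_slice_khz);
}
```

The refactoring in the patch keeps exactly this conjunction, just computed behind one helper for both the SST and MST mode-valid paths.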

* Re: [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation
  2025-11-28 16:20 ` [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation Imre Deak
@ 2025-12-12 13:23   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 13:23 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Fri, 2025-11-28 at 18:20 +0200, Imre Deak wrote:
> Use intel_dp_effective_data_rate() to calculate the required link BW
> for compressed streams on non-UHBR DP-SST links. This ensures that the
> BW is calculated the same way for all DP output types and DSC/non-DSC
> modes, during mode validation as well as during state computation.
> 
> This approach also allows accounting for the BW overhead due to DSC
> and FEC being enabled on a link. Accounting for these will be added by
> follow-up changes.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 27 +++++++++++++++----------
>  1 file changed, 16 insertions(+), 11 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index aa55a81a9a9bf..4044bdbceaef5 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2025,15 +2025,19 @@ static bool intel_dp_dsc_supports_format(const struct intel_connector *connector
>  	return drm_dp_dsc_sink_supports_format(connector->dp.dsc_dpcd, sink_dsc_format);
>  }
>  
> -static bool is_bw_sufficient_for_dsc_config(int dsc_bpp_x16, u32 link_clock,
> -					    u32 lane_count, u32 mode_clock,
> -					    enum intel_output_format output_format,
> -					    int timeslots)
> +static bool is_bw_sufficient_for_dsc_config(struct intel_dp *intel_dp,
> +					    int link_clock, int lane_count,
> +					    int mode_clock, int mode_hdisplay,
> +					    int dsc_slice_count, int link_bpp_x16,
> +					    unsigned long bw_overhead_flags)
>  {
> -	u32 available_bw, required_bw;
> +	int available_bw;
> +	int required_bw;
>  
> -	available_bw = (link_clock * lane_count * timeslots * 16)  / 8;
> -	required_bw = dsc_bpp_x16 * (intel_dp_mode_to_fec_clock(mode_clock));
> +	available_bw = intel_dp_max_link_data_rate(intel_dp, link_clock, lane_count);
> +	required_bw = intel_dp_link_required(link_clock, lane_count,
> +					     mode_clock, mode_hdisplay,
> +					     link_bpp_x16, bw_overhead_flags);
>  
>  	return available_bw >= required_bw;
>  }
> @@ -2081,11 +2085,12 @@ static int dsc_compute_link_config(struct intel_dp *intel_dp,
>  				if (ret)
>  					continue;
>  			} else {
> -				if (!is_bw_sufficient_for_dsc_config(dsc_bpp_x16, link_rate,
> -								     lane_count,
> +				if (!is_bw_sufficient_for_dsc_config(intel_dp,
> +								     link_rate, lane_count,
>  								     adjusted_mode->crtc_clock,
> -								     pipe_config->output_format,
> -								     timeslots))
> +								     adjusted_mode->hdisplay,
> +								     pipe_config->dsc.slice_count,
> +								     dsc_bpp_x16, 0))
>  					continue;
>  			}
>  


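The required-vs-available comparison that this patch moves into is_bw_sufficient_for_dsc_config() can be sketched in isolation. This is an illustrative model, not the driver's code: the helper names and units (kbps per lane, kHz pixel clock, ".4" fixed-point BPP) are assumptions, and the 97.2261% FEC efficiency figure is the one quoted in the removed comment elsewhere in this series.

```c
#include <stdbool.h>

/*
 * Effective data rate a compressed stream needs, optionally inflated by
 * the FEC overhead (link payload reduced to ~97.2261% with FEC on).
 * bpp_x16 is a .4 fixed-point bits-per-pixel value.
 */
static long long required_data_rate_kbps(int mode_khz, int bpp_x16, bool fec)
{
	long long rate = ((long long)mode_khz * bpp_x16) / 16;

	if (fec)
		rate = rate * 1000000 / 972261;

	return rate;
}

/* Mode fits if the overhead-adjusted rate is within the link's capacity. */
static bool bw_sufficient(int link_kbps_per_lane, int lanes,
			  int mode_khz, int bpp_x16, bool fec)
{
	long long available = (long long)link_kbps_per_lane * lanes;

	return available >= required_data_rate_kbps(mode_khz, bpp_x16, fec);
}
```

The point of the patch is that this one comparison, with the overhead folded into the required rate, replaces the separate timeslot-based formula, so SST, MST and eDP can share it.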

* Re: [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation
  2025-11-27 17:50 ` [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation Imre Deak
@ 2025-12-12 14:29   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 14:29 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Atm, a DP DSC video mode's required BW vs. the available BW is
> determined by calculating the maximum compressed BPP value allowed by
> the available BW. Doing that using a closed-form formula as it's done
> atm (vs. an iterative way) is problematic, since the overhead of the
> required BW itself depends on the BPP value being calculated. Instead
> of that, calculate the required BW for the minimum compressed BPP value
> supported both by the source and the sink and check this BW against
> the available BW. This change also aligns the BW calculation during
> mode validation with how this is done during state computation,
> calculating the required effective data rate with the corresponding BW
> overhead.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 137 ++++--------------------
>  drivers/gpu/drm/i915/display/intel_dp.h |   8 --
>  2 files changed, 18 insertions(+), 127 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index b40edf4febcb7..8b601994bb138 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -895,49 +895,6 @@ static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
>  	return 0;
>  }
>  
> -static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
> -{
> -	u32 bits_per_pixel = bpp;
> -
> -	/* Error out if the max bpp is less than smallest allowed valid bpp */
> -	if (bits_per_pixel < valid_dsc_bpp[0]) {
> -		drm_dbg_kms(display->drm, "Unsupported BPP %u, min %u\n",
> -			    bits_per_pixel, valid_dsc_bpp[0]);
> -		return 0;
> -	}
> -
> -	/* From XE_LPD onwards we support from bpc upto uncompressed bpp-1 BPPs */
> -	if (DISPLAY_VER(display) >= 13) {
> -		bits_per_pixel = min(bits_per_pixel, pipe_bpp - 1);
> -
> -		/*
> -		 * According to BSpec, 27 is the max DSC output bpp,
> -		 * 8 is the min DSC output bpp.
> -		 * While we can still clamp higher bpp values to 27, saving bandwidth,
> -		 * if it is required to oompress up to bpp < 8, means we can't do
> -		 * that and probably means we can't fit the required mode, even with
> -		 * DSC enabled.
> -		 */
> -		if (bits_per_pixel < 8) {
> -			drm_dbg_kms(display->drm,
> -				    "Unsupported BPP %u, min 8\n",
> -				    bits_per_pixel);
> -			return 0;
> -		}
> -		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
> -	} else {
> -		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> -
> -		/* Find the nearest match in the array of known BPPs from VESA */
> -		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
> -
> -		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> -		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
> -	}
> -
> -	return bits_per_pixel;
> -}
> -
>  static int bigjoiner_interface_bits(struct intel_display *display)
>  {
>  	return DISPLAY_VER(display) >= 14 ? 36 : 24;
> @@ -1001,64 +958,6 @@ u32 get_max_compressed_bpp_with_joiner(struct intel_display *display,
>  	return max_bpp;
>  }
>  
> -/* TODO: return a bpp_x16 value */
> -u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
> -					u32 link_clock, u32 lane_count,
> -					u32 mode_clock, u32 mode_hdisplay,
> -					int num_joined_pipes,
> -					enum intel_output_format output_format,
> -					u32 pipe_bpp,
> -					u32 timeslots)
> -{
> -	u32 bits_per_pixel, joiner_max_bpp;
> -
> -	/*
> -	 * Available Link Bandwidth(Kbits/sec) = (NumberOfLanes)*
> -	 * (LinkSymbolClock)* 8 * (TimeSlots / 64)
> -	 * for SST -> TimeSlots is 64(i.e all TimeSlots that are available)
> -	 * for MST -> TimeSlots has to be calculated, based on mode requirements
> -	 *
> -	 * Due to FEC overhead, the available bw is reduced to 97.2261%.
> -	 * To support the given mode:
> -	 * Bandwidth required should be <= Available link Bandwidth * FEC Overhead
> -	 * =>ModeClock * bits_per_pixel <= Available Link Bandwidth * FEC Overhead
> -	 * =>bits_per_pixel <= Available link Bandwidth * FEC Overhead / ModeClock
> -	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock) * 8 (TimeSlots / 64) /
> -	 *		       (ModeClock / FEC Overhead)
> -	 * =>bits_per_pixel <= (NumberOfLanes * LinkSymbolClock * TimeSlots) /
> -	 *		       (ModeClock / FEC Overhead * 8)
> -	 */
> -	bits_per_pixel = ((link_clock * lane_count) * timeslots) /
> -			 (intel_dp_mode_to_fec_clock(mode_clock) * 8);
> -
> -	/* Bandwidth required for 420 is half, that of 444 format */
> -	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
> -		bits_per_pixel *= 2;
> -
> -	/*
> -	 * According to DSC 1.2a Section 4.1.1 Table 4.1 the maximum
> -	 * supported PPS value can be 63.9375 and with the further
> -	 * mention that for 420, 422 formats, bpp should be programmed double
> -	 * the target bpp restricting our target bpp to be 31.9375 at max.
> -	 */
> -	if (output_format == INTEL_OUTPUT_FORMAT_YCBCR420)
> -		bits_per_pixel = min_t(u32, bits_per_pixel, 31);
> -
> -	drm_dbg_kms(display->drm, "Max link bpp is %u for %u timeslots "
> -				"total bw %u pixel clock %u\n",
> -				bits_per_pixel, timeslots,
> -				(link_clock * lane_count * 8),
> -				intel_dp_mode_to_fec_clock(mode_clock));
> -
> -	joiner_max_bpp = get_max_compressed_bpp_with_joiner(display, mode_clock,
> -							    mode_hdisplay, num_joined_pipes);
> -	bits_per_pixel = min(bits_per_pixel, joiner_max_bpp);
> -
> -	bits_per_pixel = intel_dp_dsc_nearest_valid_bpp(display, bits_per_pixel, pipe_bpp);
> -
> -	return bits_per_pixel;
> -}
> -
>  u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
>  				int mode_clock, int mode_hdisplay,
>  				int num_joined_pipes)
> @@ -2707,26 +2606,26 @@ bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
>  				  enum intel_output_format output_format,
>  				  int pipe_bpp, unsigned long bw_overhead_flags)
>  {
> -	struct intel_display *display = to_intel_display(connector);
> -	int dsc_max_compressed_bpp;
> -	int dsc_slice_count;
> +	struct intel_dp *intel_dp = intel_attached_dp(connector);
> +	int min_bpp_x16 = compute_min_compressed_bpp_x16(connector, output_format);
> +	int max_bpp_x16 = compute_max_compressed_bpp_x16(connector,
> +							 mode_clock, mode_hdisplay,
> +							 num_joined_pipes,
> +							 output_format,
> +							 pipe_bpp, INT_MAX);
> +	int dsc_slice_count = intel_dp_dsc_get_slice_count(connector,
> +							   mode_clock,
> +							   mode_hdisplay,
> +							   num_joined_pipes);
>  
> -	dsc_max_compressed_bpp =
> -		intel_dp_dsc_get_max_compressed_bpp(display,
> -						    link_clock,
> -						    lane_count,
> -						    mode_clock,
> -						    mode_hdisplay,
> -						    num_joined_pipes,
> -						    output_format,
> -						    pipe_bpp, 64);
> -	dsc_slice_count =
> -		intel_dp_dsc_get_slice_count(connector,
> -					     mode_clock,
> -					     mode_hdisplay,
> -					     num_joined_pipes);
> +	if (min_bpp_x16 <= 0 || min_bpp_x16 > max_bpp_x16)
> +		return false;
>  
> -	return dsc_max_compressed_bpp && dsc_slice_count;
> +	return is_bw_sufficient_for_dsc_config(intel_dp,
> +					       link_clock, lane_count,
> +					       mode_clock, mode_hdisplay,
> +					       dsc_slice_count, min_bpp_x16,
> +					       bw_overhead_flags);
>  }
>  
>  /*
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.h b/drivers/gpu/drm/i915/display/intel_dp.h
> index 0ec7baec7a8e8..25bfbfd291b0a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.h
> +++ b/drivers/gpu/drm/i915/display/intel_dp.h
> @@ -143,14 +143,6 @@ bool intel_digital_port_connected(struct intel_encoder *encoder);
>  bool intel_digital_port_connected_locked(struct intel_encoder *encoder);
>  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
>  				 u8 dsc_max_bpc);
> -u16 intel_dp_dsc_get_max_compressed_bpp(struct intel_display *display,
> -					u32 link_clock, u32 lane_count,
> -					u32 mode_clock, u32 mode_hdisplay,
> -					int num_joined_pipes,
> -					enum intel_output_format output_format,
> -					u32 pipe_bpp,
> -					u32 timeslots);
> -
>  bool intel_dp_mode_valid_with_dsc(struct intel_connector *connector,
>  				  int link_clock, int lane_count,
>  				  int mode_clock, int mode_hdisplay,


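The validation strategy this patch switches to can be sketched as follows: instead of deriving a maximum compressed BPP with a closed-form formula (hard, because the BW overhead itself depends on the BPP being computed), check whether the minimum supported compressed BPP fits into the available BW. If even the minimum does not fit, no higher BPP can, so the mode is not usable with DSC. Helper names and units below are illustrative assumptions, not the driver's.

```c
#include <stdbool.h>

/*
 * A DSC mode is feasible iff the min/max compressed BPP range is valid
 * and the required rate at the *minimum* BPP fits the available BW.
 * BPP values are .4 fixed point ("x16"), rates in kbps, clocks in kHz.
 */
static bool mode_feasible_with_dsc(long long available_kbps, int mode_khz,
				   int min_bpp_x16, int max_bpp_x16)
{
	long long required;

	if (min_bpp_x16 <= 0 || min_bpp_x16 > max_bpp_x16)
		return false;

	/* Required rate at the minimum compressed BPP. */
	required = ((long long)mode_khz * min_bpp_x16) / 16;

	return available_kbps >= required;
}
```

This mirrors the new intel_dp_mode_valid_with_dsc() flow in the diff: compute the BPP range, reject an empty range, then do a single BW check at the minimum BPP.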

* Re: [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs
  2025-11-27 17:50 ` [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs Imre Deak
@ 2025-12-12 14:34   ` Govindapillai, Vinod
  2025-12-12 14:39   ` Govindapillai, Vinod
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 14:34 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> The minimum/maximum compressed BPP values are aligned/bounded in
> intel_dp_compute_link_bpp_limits() to the corresponding source limits.
> The minimum compressed BPP value doesn't change afterwards, so no need
> to align it again, remove that.
> 
> The maximum compressed BPP, which depends on the pipe BPP value, still
> needs to be aligned, since the pipe BPP value could change after the
> above limits were computed, via intel_dp_force_dsc_pipe_bpp(). Use the
> corresponding helper for this alignment instead of open-coding the
> same.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 23 +++++------------------
>  1 file changed, 5 insertions(+), 18 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 8b601994bb138..e351774f508db 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2217,20 +2217,15 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  	struct intel_display *display = to_intel_display(intel_dp);
>  	const struct intel_connector *connector = to_intel_connector(conn_state->connector);
>  	int min_bpp_x16, max_bpp_x16, bpp_step_x16;
> -	int link_bpp_x16;
>  	int bpp_x16;
>  	int ret;
>  
> +	min_bpp_x16 = limits->link.min_bpp_x16;
>  	max_bpp_x16 = limits->link.max_bpp_x16;
>  	bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
>  
> -	/* Compressed BPP should be less than the Input DSC bpp */
> -	link_bpp_x16 = intel_dp_output_format_link_bpp_x16(pipe_config->output_format, pipe_bpp);
> -	max_bpp_x16 = min(max_bpp_x16, link_bpp_x16 - bpp_step_x16);
> -
> -	drm_WARN_ON(display->drm, !is_power_of_2(bpp_step_x16));
> -	min_bpp_x16 = round_up(limits->link.min_bpp_x16, bpp_step_x16);
> -	max_bpp_x16 = round_down(max_bpp_x16, bpp_step_x16);
> +	max_bpp_x16 = align_max_compressed_bpp_x16(connector, pipe_config->output_format,
> +						   pipe_bpp, max_bpp_x16);
>  
>  	for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
>  		if (!intel_dp_dsc_valid_compressed_bpp(intel_dp, bpp_x16))
> @@ -2346,8 +2341,6 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  	struct intel_connector *connector =
>  		to_intel_connector(conn_state->connector);
>  	int pipe_bpp, forced_bpp;
> -	int dsc_min_bpp;
> -	int dsc_max_bpp;
>  
>  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
>  
> @@ -2367,15 +2360,9 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  	pipe_config->port_clock = limits->max_rate;
>  	pipe_config->lane_count = limits->max_lane_count;
>  
> -	dsc_min_bpp = fxp_q4_to_int_roundup(limits->link.min_bpp_x16);
> -
> -	dsc_max_bpp = fxp_q4_to_int(limits->link.max_bpp_x16);
> -
> -	/* Compressed BPP should be less than the Input DSC bpp */
> -	dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);
> -
>  	pipe_config->dsc.compressed_bpp_x16 =
> -		fxp_q4_from_int(max(dsc_min_bpp, dsc_max_bpp));
> +		align_max_compressed_bpp_x16(connector, pipe_config->output_format,
> +					     pipe_bpp, limits->link.max_bpp_x16);
>  
>  	pipe_config->pipe_bpp = pipe_bpp;
>  


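The ".4 fixed-point" (x16) BPP rounding that the alignment helper in this patch relies on can be illustrated standalone: the minimum link BPP is rounded up and the maximum rounded down to the DSC BPP step, and the maximum is additionally kept strictly below the uncompressed pipe BPP. The names below are illustrative, not the driver's helpers.

```c
/* Round a .4 fixed-point BPP up to the next multiple of the step. */
static int round_up_x16(int bpp_x16, int step_x16)
{
	return (bpp_x16 + step_x16 - 1) / step_x16 * step_x16;
}

/* Round a .4 fixed-point BPP down to a multiple of the step. */
static int round_down_x16(int bpp_x16, int step_x16)
{
	return bpp_x16 / step_x16 * step_x16;
}

/*
 * Bound the maximum compressed BPP strictly below the uncompressed
 * pipe BPP (compressed output must actually compress), then align it
 * down to the step.
 */
static int align_max_bpp_x16(int pipe_bpp, int max_bpp_x16, int step_x16)
{
	int limit_x16 = pipe_bpp * 16 - step_x16;

	if (max_bpp_x16 > limit_x16)
		max_bpp_x16 = limit_x16;

	return round_down_x16(max_bpp_x16, step_x16);
}
```

With a power-of-two step (as the removed drm_WARN_ON in the diff enforced), these roundings are exact and the min..max walk in dsc_compute_compressed_bpp() stays on step boundaries.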


* Re: [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP
  2025-11-27 17:50 ` [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP Imre Deak
@ 2025-12-12 14:45   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 14:45 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> The maximum pipe BPP value (used as the DSC input BPP) has already
> been aligned to the corresponding source/sink input BPP capabilities
> in intel_dp_compute_config_limits(). So there is no need to perform
> the same alignment again in intel_edp_dsc_compute_pipe_bpp() called
> later; this function can simply use the already aligned maximum pipe
> BPP value, do that.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 16 +++-------------
>  1 file changed, 3 insertions(+), 13 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index e351774f508db..ee33759a2f5d7 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2337,26 +2337,16 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  					  struct drm_connector_state *conn_state,
>  					  const struct link_config_limits *limits)
>  {
> -	struct intel_display *display = to_intel_display(intel_dp);
>  	struct intel_connector *connector =
>  		to_intel_connector(conn_state->connector);
>  	int pipe_bpp, forced_bpp;
>  
>  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
> -
> -	if (forced_bpp) {
> +	if (forced_bpp)
>  		pipe_bpp = forced_bpp;
> -	} else {
> -		int max_bpc = limits->pipe.max_bpp / 3;
> +	else
> +		pipe_bpp = limits->pipe.max_bpp;
>  
> -		/* For eDP use max bpp that can be supported with DSC. */
> -		pipe_bpp = intel_dp_dsc_compute_max_bpp(connector, max_bpc);
> -		if (!is_dsc_pipe_bpp_sufficient(limits, pipe_bpp)) {
> -			drm_dbg_kms(display->drm,
> -				    "Computed BPC is not in DSC BPC limits\n");
> -			return -EINVAL;
> -		}
> -	}
>  	pipe_config->port_clock = limits->max_rate;
>  	pipe_config->lane_count = limits->max_lane_count;
>  



* Re: [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST
  2025-11-27 17:50 ` [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST Imre Deak
@ 2025-12-12 14:59   ` Govindapillai, Vinod
  2025-12-12 18:41     ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 14:59 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> The maximum pipe BPP value (used as the DSC input BPP) has already
> been aligned to the corresponding source/sink input BPP capabilities
> in intel_dp_compute_config_limits(). So there is no need to perform
> the same alignment again in intel_dp_dsc_compute_pipe_bpp() called
> later; this function can simply use the already aligned maximum pipe
> BPP value, do that.
> 
> Also, there is no point in trying pipe BPP values lower than the
> maximum: this would only make dsc_compute_compressed_bpp() start with
> a lower _compressed_ BPP value, but this lower compressed BPP value
> has been tried already when dsc_compute_compressed_bpp() was called
> with the higher pipe BPP value (i.e. the first
> dsc_compute_compressed_bpp() call tries already all the possible
> compressed BPP values, which are all below the pipe BPP value passed
> to it). Simplify the function accordingly, trying only the maximum
> pipe BPP value.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 29 +++++++++++++------------
>  1 file changed, 8 insertions(+), 21 deletions(-)
> 

I guess, typically this is a two patch solution. But considering the
code changes it makes sense to have it as one patch.

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index ee33759a2f5d7..902f3a054a971 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2294,11 +2294,8 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  					 struct drm_connector_state *conn_state,
>  					 const struct link_config_limits *limits)
>  {
> -	const struct intel_connector *connector =
> -		to_intel_connector(conn_state->connector);
> -	u8 dsc_bpc[3] = {};
>  	int forced_bpp, pipe_bpp;
> -	int num_bpc, i, ret;
> +	int ret;
>  
>  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
>  
> @@ -2311,25 +2308,15 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  		}
>  	}
>  
> -	/*
> -	 * Get the maximum DSC bpc that will be supported by any valid
> -	 * link configuration and compressed bpp.
> -	 */
> -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd, dsc_bpc);
> -	for (i = 0; i < num_bpc; i++) {
> -		pipe_bpp = dsc_bpc[i] * 3;
> -		if (pipe_bpp < limits->pipe.min_bpp || pipe_bpp > limits->pipe.max_bpp)
> -			continue;
> +	pipe_bpp = limits->pipe.max_bpp;
> +	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> +					 limits, pipe_bpp);
> +	if (ret)
> +		return -EINVAL;
>  
> -		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> -						 limits, pipe_bpp);
> -		if (ret == 0) {
> -			pipe_config->pipe_bpp = pipe_bpp;
> -			return 0;
> -		}
> -	}
> +	pipe_config->pipe_bpp = pipe_bpp;
>  
> -	return -EINVAL;
> +	return 0;
>  }
>  
>  static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,


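The monotonicity argument in this commit message can be modeled as a toy search: the compressed-BPP loop tries every value from just below the pipe BPP down to the minimum, so rerunning it with a lower pipe BPP only re-tries a subset of values the first run already covered. predicate() below stands in for the "does this compressed BPP yield a valid config" check; all names are illustrative, not the driver's.

```c
#include <stdbool.h>

/* Illustrative stand-in: only compressed BPPs at or below 10 bpp succeed. */
static bool predicate(int bpp_x16)
{
	return bpp_x16 <= 10 * 16;
}

/*
 * Walk compressed BPPs downward from just below the pipe BPP to the
 * minimum, returning the first (highest) valid one, or -1 if none.
 */
static int search_compressed_bpp(int pipe_bpp, int min_bpp_x16, int step_x16)
{
	int bpp_x16;

	for (bpp_x16 = pipe_bpp * 16 - step_x16; bpp_x16 >= min_bpp_x16;
	     bpp_x16 -= step_x16)
		if (predicate(bpp_x16))
			return bpp_x16;

	return -1;
}
```

Because a run starting from a lower pipe BPP visits a strict subset of the values visited when starting from the maximum, it can never find a result the first run missed, which is why the patch drops the retry loop over lower pipe BPPs.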

* Re: [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP for DP-SST
  2025-11-27 17:50 ` [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP " Imre Deak
@ 2025-12-12 15:21   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:21 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> If dsc_compute_compressed_bpp() failed with a forced pipe BPP value
> (where the forced pipe BPP value itself is valid within the min/max
> pipe BPP limits), the function will also fail when called with the
> maximum pipe BPP value: dsc_compute_compressed_bpp() will try all
> compressed BPPs below the passed-in pipe BPP value, and if the
> function failed with a given (low) compressed BPP value it will also
> fail with a compressed BPP value higher than the one which failed
> already.
> 
> Based on the above, remove the logic to retry computing a compressed
> BPP value with the maximum pipe BPP value if computing the compressed
> BPP failed already with the (lower) forced pipe BPP value.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 14 ++++----------
>  1 file changed, 4 insertions(+), 10 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 902f3a054a971..a921092e760b5 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2298,17 +2298,11 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  	int ret;
>  
>  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
> +	if (forced_bpp)
> +		pipe_bpp = forced_bpp;
> +	else
> +		pipe_bpp = limits->pipe.max_bpp;
>  
> -	if (forced_bpp) {
> -		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> -						 limits, forced_bpp);
> -		if (ret == 0) {
> -			pipe_config->pipe_bpp = forced_bpp;
> -			return 0;
> -		}
> -	}
> -
> -	pipe_bpp = limits->pipe.max_bpp;
>  	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
>  					 limits, pipe_bpp);
>  	if (ret)


^ permalink raw reply	[flat|nested] 137+ messages in thread
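The monotonicity argument in the commit message above can be sketched with a small standalone C model. Here bpp_fits() and search_compressed_bpp() are hypothetical stand-ins for the real link BW check and the downward-stepping loop in dsc_compute_compressed_bpp(); the real check is considerably more involved:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the real link BW check: a compressed BPP fits if it is at
 * or below some fixed link budget (all values in .4 fixed point, x16). */
static bool bpp_fits(int bpp_x16, int budget_x16)
{
	return bpp_x16 <= budget_x16;
}

/* Walk compressed BPP values downward from max_bpp_x16 and return the
 * first one that fits, or -1 if none in [min_bpp_x16, max_bpp_x16] fits. */
static int search_compressed_bpp(int max_bpp_x16, int min_bpp_x16,
				 int step_x16, int budget_x16)
{
	int bpp_x16;

	for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= step_x16)
		if (bpp_fits(bpp_x16, budget_x16))
			return bpp_x16;

	return -1;
}
```

A failed search means even min_bpp_x16 did not fit; since the minimum stays the same, restarting the search from a higher maximum (pipe) BPP cannot succeed either, which is why the retry was removable.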

* Re: [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP
  2025-11-27 17:50 ` [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP Imre Deak
@ 2025-12-12 15:38   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:38 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> Move computing the eDP compressed BPP value to the function computing
> this for DP, allowing further simplifications later.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index a921092e760b5..81240529337bc 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2226,6 +2226,14 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
>  
>  	max_bpp_x16 = align_max_compressed_bpp_x16(connector, pipe_config->output_format,
>  						   pipe_bpp, max_bpp_x16);
> +	if (intel_dp_is_edp(intel_dp)) {
> +		pipe_config->port_clock = limits->max_rate;
> +		pipe_config->lane_count = limits->max_lane_count;
> +
> +		pipe_config->dsc.compressed_bpp_x16 = max_bpp_x16;
> +
> +		return 0;
> +	}
>  
>  	for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
>  		if (!intel_dp_dsc_valid_compressed_bpp(intel_dp, bpp_x16))
> @@ -2318,9 +2326,8 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  					  struct drm_connector_state *conn_state,
>  					  const struct link_config_limits *limits)
>  {
> -	struct intel_connector *connector =
> -		to_intel_connector(conn_state->connector);
>  	int pipe_bpp, forced_bpp;
> +	int ret;
>  
>  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
>  	if (forced_bpp)
> @@ -2328,12 +2335,10 @@ static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  	else
>  		pipe_bpp = limits->pipe.max_bpp;
>  
> -	pipe_config->port_clock = limits->max_rate;
> -	pipe_config->lane_count = limits->max_lane_count;
> -
> -	pipe_config->dsc.compressed_bpp_x16 =
> -		align_max_compressed_bpp_x16(connector, pipe_config->output_format,
> -					     pipe_bpp, limits->link.max_bpp_x16);
> +	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> +					 limits, pipe_bpp);
> +	if (ret)
> +		return -EINVAL;
>  
>  	pipe_config->pipe_bpp = pipe_bpp;
>  


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation
  2025-11-27 17:50 ` [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation Imre Deak
@ 2025-12-12 15:39   ` Govindapillai, Vinod
  0 siblings, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:39 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> intel_edp_dsc_compute_pipe_bpp() now matches
> intel_dp_dsc_compute_pipe_bpp(); remove the former function.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 32 ++-----------------------
>  1 file changed, 2 insertions(+), 30 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 81240529337bc..de59b93388f41 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2321,30 +2321,6 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
>  	return 0;
>  }
>  
> -static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
> -					  struct intel_crtc_state *pipe_config,
> -					  struct drm_connector_state *conn_state,
> -					  const struct link_config_limits *limits)
> -{
> -	int pipe_bpp, forced_bpp;
> -	int ret;
> -
> -	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
> -	if (forced_bpp)
> -		pipe_bpp = forced_bpp;
> -	else
> -		pipe_bpp = limits->pipe.max_bpp;
> -
> -	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> -					 limits, pipe_bpp);
> -	if (ret)
> -		return -EINVAL;
> -
> -	pipe_config->pipe_bpp = pipe_bpp;
> -
> -	return 0;
> -}
> -
>  /*
>   * Return whether FEC must be enabled for 8b10b SST or MST links. On 128b132b
>   * links FEC is always enabled implicitly by the HW, so this function returns
> @@ -2396,12 +2372,8 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
>  	 * figured out for DP MST DSC.
>  	 */
>  	if (!is_mst) {
> -		if (intel_dp_is_edp(intel_dp))
> -			ret = intel_edp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
> -							     conn_state, limits);
> -		else
> -			ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
> -							    conn_state, limits);
> +		ret = intel_dp_dsc_compute_pipe_bpp(intel_dp, pipe_config,
> +						    conn_state, limits);
>  		if (ret) {
>  			drm_dbg_kms(display->drm,
>  				    "No Valid pipe bpp for given mode ret = %d\n", ret);


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-11-27 17:49 ` [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp() Imre Deak
@ 2025-12-12 15:41   ` Govindapillai, Vinod
  2025-12-15  7:46   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:41 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out align_max_sink_dsc_input_bpp(), also used later for
> computing the maximum DSC input BPP limit.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++---------
>  1 file changed, 18 insertions(+), 10 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 000fccc39a292..dcb9bc11e677b 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
>  	return intel_dp_dsc_min_src_input_bpc();
>  }
>  
> +static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
> +					int max_pipe_bpp)
> +{
> +	u8 dsc_bpc[3];
> +	int num_bpc;
> +	int i;
> +
> +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> +						       dsc_bpc);
> +	for (i = 0; i < num_bpc; i++) {
> +		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
> +			return dsc_bpc[i] * 3;
> +	}
> +
> +	return 0;
> +}
> +
>  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
>  				 u8 max_req_bpc)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> -	int i, num_bpc;
> -	u8 dsc_bpc[3] = {};
>  	int dsc_max_bpc;
>  
>  	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
> @@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
>  
>  	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
>  
> -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> -						       dsc_bpc);
> -	for (i = 0; i < num_bpc; i++) {
> -		if (dsc_max_bpc >= dsc_bpc[i])
> -			return dsc_bpc[i] * 3;
> -	}
> -
> -	return 0;
> +	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
>  }
>  
>  static int intel_dp_source_dsc_version_minor(struct intel_display *display)


^ permalink raw reply	[flat|nested] 137+ messages in thread
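The alignment helper factored out above can be modeled in isolation. This sketch assumes, as the patch does, that the sink's supported input BPCs are reported in decreasing order; the array argument here is a hypothetical stand-in for the list returned by drm_dp_dsc_sink_supported_input_bpcs():

```c
#include <assert.h>

/* Example sink capability list, highest BPC first (hypothetical values). */
static const int example_sink_bpcs[] = { 12, 10, 8 };

/* Return the largest sink-supported DSC input BPP (bpc * 3) that does not
 * exceed max_pipe_bpp, or 0 if none qualifies. */
static int align_max_sink_dsc_input_bpp(const int *dsc_bpc, int num_bpc,
					int max_pipe_bpp)
{
	int i;

	for (i = 0; i < num_bpc; i++)
		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
			return dsc_bpc[i] * 3;

	return 0;
}
```

With a pipe limit of 26 bpp, for instance, 12 bpc (36 bpp) and 10 bpc (30 bpp) are rejected and 8 bpc (24 bpp) is selected.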

* Re: [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-11-27 17:49 ` [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16() Imre Deak
@ 2025-12-12 15:46   ` Govindapillai, Vinod
  2025-12-15  7:49   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:46 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out align_max_vesa_compressed_bpp_x16(), also used later for
> computing the maximum DSC compressed BPP limit.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
>  1 file changed, 20 insertions(+), 15 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index dcb9bc11e677b..3111758578d6c 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
>  		return 6144 * 8;
>  }
>  
> +static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
> +{
> +	int i;
> +
> +	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
> +
> +		if (vesa_bpp_x16 <= max_link_bpp_x16)
> +			return vesa_bpp_x16;
> +	}
> +
> +	return 0;
> +}
> +
>  static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
>  {
>  	u32 bits_per_pixel = bpp;
> -	int i;
>  
>  	/* Error out if the max bpp is less than smallest allowed valid bpp */
>  	if (bits_per_pixel < valid_dsc_bpp[0]) {
> @@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
>  		}
>  		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
>  	} else {
> +		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> +
>  		/* Find the nearest match in the array of known BPPs from VESA */
> -		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
> -			if (bits_per_pixel < valid_dsc_bpp[i + 1])
> -				break;
> -		}
> -		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
> -			    bits_per_pixel, valid_dsc_bpp[i]);
> +		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
>  
> -		bits_per_pixel = valid_dsc_bpp[i];
> +		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> +		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
>  	}
>  
>  	return bits_per_pixel;
> @@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
>  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
>  {
>  	struct intel_display *display = to_intel_display(intel_dp);
> -	int i;
>  
>  	if (DISPLAY_VER(display) >= 13) {
>  		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
> @@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
>  	if (fxp_q4_to_frac(bpp_x16))
>  		return false;
>  
> -	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> -		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
> -			return true;
> -	}
> -
> -	return false;
> +	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
>  }
>  
>  /*


^ permalink raw reply	[flat|nested] 137+ messages in thread
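For reference, the VESA alignment above can be modeled outside the driver. This sketch assumes the VESA BPP table {6, 8, 10, 12, 15} and models the .4 binary fixed-point ("x16") helpers with a plain shift; treat both as illustrative stand-ins for the definitions in intel_dp.c and fxp helpers:

```c
#include <assert.h>

/* Assumed VESA-defined compressed BPP table (ascending). */
static const int valid_dsc_bpp[] = { 6, 8, 10, 12, 15 };

/* .4 binary fixed point: integer BPP -> x16 units. */
static int fxp_q4_from_int(int x)
{
	return x << 4;
}

/* Return the largest VESA BPP (in x16 units) not above the limit, or 0. */
static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
{
	int i;

	for (i = (int)(sizeof(valid_dsc_bpp) / sizeof(valid_dsc_bpp[0])) - 1;
	     i >= 0; i--) {
		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);

		if (vesa_bpp_x16 <= max_link_bpp_x16)
			return vesa_bpp_x16;
	}

	return 0;
}
```

This also shows why the function doubles as a validity check: a BPP is one of the VESA values exactly when aligning it returns the value itself.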

* Re: [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values
  2025-11-27 17:49 ` [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values Imre Deak
@ 2025-12-12 15:48   ` Govindapillai, Vinod
  2025-12-15  7:51   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:48 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Make sure that state computation fails if the minimum/maximum link BPP
> values became invalid as a result of limiting both of these values
> separately to the corresponding source/sink capability limits.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 3111758578d6c..545d872a30403 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2654,7 +2654,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  	limits->link.max_bpp_x16 = max_link_bpp_x16;
>  
>  	drm_dbg_kms(display->drm,
> -		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d max link_bpp " FXP_Q4_FMT "\n",
> +		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d min link_bpp " FXP_Q4_FMT " max link_bpp " FXP_Q4_FMT "\n",
>  		    encoder->base.base.id, encoder->base.name,
>  		    crtc->base.base.id, crtc->base.name,
>  		    adjusted_mode->crtc_clock,
> @@ -2662,8 +2662,13 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		    limits->max_lane_count,
>  		    limits->max_rate,
>  		    limits->pipe.max_bpp,
> +		    FXP_Q4_ARGS(limits->link.min_bpp_x16),
>  		    FXP_Q4_ARGS(limits->link.max_bpp_x16));
>  
> +	if (limits->link.min_bpp_x16 <= 0 ||
> +	    limits->link.min_bpp_x16 > limits->link.max_bpp_x16)
> +		return false;
> +
>  	return true;
>  }
>  


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value
  2025-11-27 17:49 ` [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value Imre Deak
@ 2025-12-12 15:51   ` Govindapillai, Vinod
  2025-12-15  7:51   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:51 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> There is no reason to accept a minimum/maximum link BPP value above the
> maximum throughput BPP value; fail the state computation in this case.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 2 --
>  1 file changed, 2 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 545d872a30403..f97ee8265836a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2638,8 +2638,6 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
>  
>  		throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector, crtc_state);
> -		throughput_max_bpp_x16 = clamp(throughput_max_bpp_x16,
> -					       limits->link.min_bpp_x16, max_link_bpp_x16);
>  		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
>  			max_link_bpp_x16 = throughput_max_bpp_x16;


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed BPP value
  2025-11-27 17:49 ` [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed " Imre Deak
@ 2025-12-12 15:52   ` Govindapillai, Vinod
  2025-12-15  7:52   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Govindapillai, Vinod @ 2025-12-12 15:52 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> There is no reason to accept an invalid maximum sink compressed BPP
> value (i.e. 0); fail the state computation in this case.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 

Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>

> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index f97ee8265836a..db7e49c17ca8d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2631,8 +2631,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
>  									crtc_state,
>  									limits->pipe.max_bpp / 3);
> -		dsc_max_bpp = dsc_sink_max_bpp ?
> -			      min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
> +		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
>  		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
>  
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));


^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config()
  2025-12-11 10:23     ` Imre Deak
@ 2025-12-12 18:03       ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-12 18:03 UTC (permalink / raw)
  To: Jouni Hogander, intel-xe@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org

On Thu, Dec 11, 2025 at 12:23:42PM +0200, Imre Deak wrote:
> On Thu, Dec 11, 2025 at 08:59:25AM +0200, Jouni Hogander wrote:
> > On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:
> > > Simplify things by computing the detailed slice configuration using
> > > intel_dp_dsc_get_slice_config(), instead of open-coding the same.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 35 +++----------------------
> > >  1 file changed, 3 insertions(+), 32 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index 003f4b18c1175..d41c75c6f7831 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -2387,7 +2387,6 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > >  		&pipe_config->hw.adjusted_mode;
> > >  	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> > >  	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
> > > -	int slices_per_line;
> > >  	int ret;
> > >  
> > >  	/*
> > > @@ -2413,39 +2412,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
> > >  		}
> > >  	}
> > >  
> > > -	/* Calculate Slice count */
> > > -	slices_per_line = intel_dp_dsc_get_slice_count(connector,
> > > -						       adjusted_mode->crtc_clock,
> > > -						       adjusted_mode->crtc_hdisplay,
> > > -						       num_joined_pipes);
> > > -	if (!slices_per_line)
> > > +	if (!intel_dp_dsc_get_slice_config(connector, adjusted_mode->crtc_clock,
> > > +					   adjusted_mode->crtc_hdisplay, num_joined_pipes,
> > > +					   &pipe_config->dsc.slice_config))
> > >  		return -EINVAL;
> > >  
> > > -	/*
> > > -	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
> > > -	 * is greater than the maximum Cdclock and if slice count is even
> > > -	 * then we need to use 2 VDSC instances.
> > > -	 * In case of Ultrajoiner along with 12 slices we need to use 3
> > > -	 * VDSC instances.
> > > -	 */
> > 
> > I'll guess you have considered this comment being useless?
> 
> A stricter condition between pixel clock (mode clock) vs. CD clock is
> described already in intel_dp_dsc_min_slice_count(). I can further
> clarify the comment in that function, mentioning also the above VDSC
> engine 1 ppc limit as a reason for the condition there.

After talking with Ville, the 1 pixel per clock vs. CDCLK limitation is
actually explained and accounted for (increasing the CDCLK if necessary)
in intel_vdsc_min_cdclk(). Another thing to do would be to increase the
number slices and with that the number of VDSC engine streams, so that a
lower CDCLK can be used. This optimization isn't attempted atm, but I'll
add a TODO comment to intel_dp_dsc_get_slice_config() to consider doing
that in the future.

> The 12 slices-per-line / 3 VDSC streams-per-pipe logic is already
> described in intel_dsc_get_slice_config().
> 
> > Anyways, patch looks ok:
> > 
> > Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
> > 
> > > -	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
> > > -
> > > -	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
> > > -	    slices_per_line == 12)
> > > -		pipe_config->dsc.slice_config.streams_per_pipe = 3;
> > > -	else if (pipe_config->joiner_pipes || slices_per_line > 1)
> > > -		pipe_config->dsc.slice_config.streams_per_pipe = 2;
> > > -	else
> > > -		pipe_config->dsc.slice_config.streams_per_pipe = 1;
> > > -
> > > -	pipe_config->dsc.slice_config.slices_per_stream =
> > > -		slices_per_line /
> > > -		pipe_config->dsc.slice_config.pipes_per_line /
> > > -		pipe_config->dsc.slice_config.streams_per_pipe;
> > > -
> > > -	drm_WARN_ON(display->drm,
> > > -		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
> > > -
> > >  	ret = intel_dp_dsc_compute_params(connector, pipe_config);
> > >  	if (ret < 0) {
> > >  		drm_dbg_kms(display->drm,
> > 

^ permalink raw reply	[flat|nested] 137+ messages in thread
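The pipes/streams/slices decomposition discussed above can be sketched as a product of the three topology levels. This is a simplified model of intel_dsc_line_slice_count() with a hypothetical local struct name; it only demonstrates the invariant the removed drm_WARN_ON() checked:

```c
#include <assert.h>

struct dsc_slice_config {
	int pipes_per_line;	/* joined pipes driving one scanline */
	int streams_per_pipe;	/* VDSC engines used per pipe */
	int slices_per_stream;	/* slices produced by each engine */
};

/* The line-wide slice count is the product of the three levels. */
static int line_slice_count(const struct dsc_slice_config *config)
{
	return config->pipes_per_line *
	       config->streams_per_pipe *
	       config->slices_per_stream;
}
```

For example, the ultrajoiner case in the thread above (4 pipes, 3 engines per pipe, 1 slice per engine) yields the 12 slices-per-line figure.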

* [PATCH v2 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config() Imre Deak
  2025-12-11  6:55   ` Hogander, Jouni
@ 2025-12-12 18:17   ` Imre Deak
  2025-12-15  6:06     ` Hogander, Jouni
  1 sibling, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-12 18:17 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: Jouni Högander

Add intel_dp_dsc_get_slice_config() to compute the detailed slice
configuration and determine the slices-per-line value (returned by
intel_dp_dsc_get_slice_count()) using this function.

v2: Fix incorrectly returning false from intel_dp_dsc_min_slice_count()
    due to rebase fail. (Jouni)

Cc: Jouni Högander <jouni.hogander@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 31 ++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index ff776bf3b0366..1808020877d19 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1018,9 +1018,11 @@ static int intel_dp_dsc_min_slice_count(const struct intel_connector *connector,
 	return min_slice_count;
 }
 
-u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
-				int mode_clock, int mode_hdisplay,
-				int num_joined_pipes)
+static bool
+intel_dp_dsc_get_slice_config(const struct intel_connector *connector,
+			      int mode_clock, int mode_hdisplay,
+			      int num_joined_pipes,
+			      struct intel_dsc_slice_config *config_ret)
 {
 	struct intel_display *display = to_intel_display(connector);
 	int min_slice_count =
@@ -1057,8 +1059,11 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		if (mode_hdisplay % slices_per_line)
 			continue;
 
-		if (min_slice_count <= slices_per_line)
-			return slices_per_line;
+		if (min_slice_count <= slices_per_line) {
+			*config_ret = config;
+
+			return true;
+		}
 	}
 
 	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in the mask. */
@@ -1069,7 +1074,21 @@ u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
 		    min_slice_count,
 		    (int)BITS_PER_TYPE(sink_slice_count_mask), &sink_slice_count_mask);
 
-	return 0;
+	return false;
+}
+
+u8 intel_dp_dsc_get_slice_count(const struct intel_connector *connector,
+				int mode_clock, int mode_hdisplay,
+				int num_joined_pipes)
+{
+	struct intel_dsc_slice_config config;
+
+	if (!intel_dp_dsc_get_slice_config(connector,
+					   mode_clock, mode_hdisplay,
+					   num_joined_pipes, &config))
+		return 0;
+
+	return intel_dsc_line_slice_count(&config);
 }
 
 static bool source_can_output(struct intel_dp *intel_dp,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread
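The selection loop in the patch above can be modeled with a simplified, self-contained sketch. It assumes the sink slice-count mask uses bit N-1 for N slices (matching the debug message in the patch), and pick_slices_per_line() is a hypothetical stand-in that ignores the streams-per-pipe level of the real config:

```c
#include <assert.h>

/* Try increasing per-pipe slice counts and accept the first line-wide
 * slice count that the sink supports, divides hdisplay evenly and
 * satisfies the minimum required by the pixel rate. Returns 0 on failure. */
static int pick_slices_per_line(unsigned int sink_slice_count_mask,
				int min_slice_count, int mode_hdisplay,
				int num_joined_pipes)
{
	int slices_per_pipe;

	for (slices_per_pipe = 1; slices_per_pipe <= 4; slices_per_pipe++) {
		int slices_per_line = num_joined_pipes * slices_per_pipe;

		/* Bit N-1 set in the mask means N slices are supported. */
		if (!(sink_slice_count_mask & (1u << (slices_per_line - 1))))
			continue;

		if (mode_hdisplay % slices_per_line)
			continue;

		if (min_slice_count <= slices_per_line)
			return slices_per_line;
	}

	return 0;
}
```

With a sink supporting 1, 2 and 4 slices (mask 0xB), a single pipe and a 1920-pixel-wide mode, a minimum of 3 slices skips the unsupported 3-slice count and lands on 4.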

* [PATCH v2 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config()
  2025-11-27 17:50 ` [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config() Imre Deak
  2025-12-11  6:59   ` Hogander, Jouni
@ 2025-12-12 18:17   ` Imre Deak
  1 sibling, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-12 18:17 UTC (permalink / raw)
  To: intel-gfx, intel-xe; +Cc: Jouni Högander

Simplify things by computing the detailed slice configuration using
intel_dp_dsc_get_slice_config(), instead of open-coding the same.

While at it add a TODO comment to intel_dp_dsc_compute_config() to
explore if it's worth increasing the number of VDSC stream engines used,
in order to reduce the minimum CDCLK required.

v2: Add a TODO comment to intel_dp_dsc_compute_config() to explore if
    it's worth increasing the number of slices in order to use a lower
    CDCLK. (Jouni)

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/display/intel_dp.c | 41 ++++++-------------------
 1 file changed, 9 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
index 1808020877d19..61b996616f9e7 100644
--- a/drivers/gpu/drm/i915/display/intel_dp.c
+++ b/drivers/gpu/drm/i915/display/intel_dp.c
@@ -1040,6 +1040,12 @@ intel_dp_dsc_get_slice_config(const struct intel_connector *connector,
 	 * ICL:  2x2
 	 * BMG:  2x2, or for ultrajoined 4 pipes: 3x1
 	 * TGL+: 2x4 (TODO: Add support for this)
+	 *
+	 * TODO: Explore if it's worth increasing the number of slices (from 1
+	 * to 2 or 3), so that multiple VDSC engines can be used, thus
+	 * reducing the minimum CDCLK requirement, which in turn is determined
+	 * by the 1 pixel per clock VDSC engine throughput in
+	 * intel_vdsc_min_cdclk().
 	 */
 	for (slices_per_pipe = 1; slices_per_pipe <= 4; slices_per_pipe++) {
 		struct intel_dsc_slice_config config;
@@ -2387,7 +2393,6 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		&pipe_config->hw.adjusted_mode;
 	int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
 	bool is_mst = intel_crtc_has_type(pipe_config, INTEL_OUTPUT_DP_MST);
-	int slices_per_line;
 	int ret;
 
 	/*
@@ -2413,39 +2418,11 @@ int intel_dp_dsc_compute_config(struct intel_dp *intel_dp,
 		}
 	}
 
-	/* Calculate Slice count */
-	slices_per_line = intel_dp_dsc_get_slice_count(connector,
-						       adjusted_mode->crtc_clock,
-						       adjusted_mode->crtc_hdisplay,
-						       num_joined_pipes);
-	if (!slices_per_line)
+	if (!intel_dp_dsc_get_slice_config(connector, adjusted_mode->crtc_clock,
+					   adjusted_mode->crtc_hdisplay, num_joined_pipes,
+					   &pipe_config->dsc.slice_config))
 		return -EINVAL;
 
-	/*
-	 * VDSC engine operates at 1 Pixel per clock, so if peak pixel rate
-	 * is greater than the maximum Cdclock and if slice count is even
-	 * then we need to use 2 VDSC instances.
-	 * In case of Ultrajoiner along with 12 slices we need to use 3
-	 * VDSC instances.
-	 */
-	pipe_config->dsc.slice_config.pipes_per_line = num_joined_pipes;
-
-	if (pipe_config->joiner_pipes && num_joined_pipes == 4 &&
-	    slices_per_line == 12)
-		pipe_config->dsc.slice_config.streams_per_pipe = 3;
-	else if (pipe_config->joiner_pipes || slices_per_line > 1)
-		pipe_config->dsc.slice_config.streams_per_pipe = 2;
-	else
-		pipe_config->dsc.slice_config.streams_per_pipe = 1;
-
-	pipe_config->dsc.slice_config.slices_per_stream =
-		slices_per_line /
-		pipe_config->dsc.slice_config.pipes_per_line /
-		pipe_config->dsc.slice_config.streams_per_pipe;
-
-	drm_WARN_ON(display->drm,
-		    intel_dsc_line_slice_count(&pipe_config->dsc.slice_config) != slices_per_line);
-
 	ret = intel_dp_dsc_compute_params(connector, pipe_config);
 	if (ret < 0) {
 		drm_dbg_kms(display->drm,
-- 
2.49.1


^ permalink raw reply related	[flat|nested] 137+ messages in thread

* Re: [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST
  2025-12-12 14:59   ` Govindapillai, Vinod
@ 2025-12-12 18:41     ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-12 18:41 UTC (permalink / raw)
  To: Vinod Govindapillai
  Cc: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org

On Fri, Dec 12, 2025 at 04:59:31PM +0200, Vinod Govindapillai wrote:
> On Thu, 2025-11-27 at 19:50 +0200, Imre Deak wrote:

> > The maximum pipe BPP value (used as the DSC input BPP) has already
> > been aligned to the corresponding source/sink input BPP capabilities
> > in intel_dp_compute_config_limits(). So there is no need to perform
> > the same alignment again in intel_dp_dsc_compute_pipe_bpp() called
> > later; that function can simply use the already aligned maximum pipe
> > BPP value, so do that.
> > 
> > Also, there is no point in trying pipe BPP values lower than the
> > maximum: this would only make dsc_compute_compressed_bpp() start with
> > a lower _compressed_ BPP value, but this lower compressed BPP value
> > has been tried already when dsc_compute_compressed_bpp() was called
> > with the higher pipe BPP value (i.e. the first
> > dsc_compute_compressed_bpp() call already tries all the possible
> > compressed BPP values, which are all below the pipe BPP value passed
> > to it). Simplify the function accordingly, trying only the maximum
> > pipe BPP value.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 29 +++++++------------------
> >  1 file changed, 8 insertions(+), 21 deletions(-)
> > 
> 
> I guess, typically this is a two-patch solution. But considering the
> code changes it makes sense to have it as one patch.

Yes, the current code before this patch combines the alignment and
fallback logic, so I found it clearer to remove both in one patch,
instead of tweaking the code in one patch to do only one of these and
then remove the tweak in a separate patch.

> 
> Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index ee33759a2f5d7..902f3a054a971 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -2294,11 +2294,8 @@ static int intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
> >  					 struct drm_connector_state *conn_state,
> >  					 const struct link_config_limits *limits)
> >  {
> > -	const struct intel_connector *connector =
> > -		to_intel_connector(conn_state->connector);
> > -	u8 dsc_bpc[3] = {};
> >  	int forced_bpp, pipe_bpp;
> > -	int num_bpc, i, ret;
> > +	int ret;
> >  
> >  	forced_bpp = intel_dp_force_dsc_pipe_bpp(intel_dp, limits);
> >  
> > @@ -2311,25 +2308,15 @@ static int
> > intel_dp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,
> >  		}
> >  	}
> >  
> > -	/*
> > -	 * Get the maximum DSC bpc that will be supported by any valid
> > -	 * link configuration and compressed bpp.
> > -	 */
> > -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd, dsc_bpc);
> > -	for (i = 0; i < num_bpc; i++) {
> > -		pipe_bpp = dsc_bpc[i] * 3;
> > -		if (pipe_bpp < limits->pipe.min_bpp || pipe_bpp > limits->pipe.max_bpp)
> > -			continue;
> > +	pipe_bpp = limits->pipe.max_bpp;
> > +	ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> > +					 limits, pipe_bpp);
> > +	if (ret)
> > +		return -EINVAL;
> >  
> > -		ret = dsc_compute_compressed_bpp(intel_dp, pipe_config, conn_state,
> > -						 limits, pipe_bpp);
> > -		if (ret == 0) {
> > -			pipe_config->pipe_bpp = pipe_bpp;
> > -			return 0;
> > -		}
> > -	}
> > +	pipe_config->pipe_bpp = pipe_bpp;
> >  
> > -	return -EINVAL;
> > +	return 0;
> >  }
> >  
> >  static int intel_edp_dsc_compute_pipe_bpp(struct intel_dp *intel_dp,

^ permalink raw reply	[flat|nested] 137+ messages in thread

* ✓ i915.CI.BAT: success for drm/i915/dp: Clean up link BW/DSC slice config computation (rev3)
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (52 preceding siblings ...)
  2025-12-01  9:46 ` ✗ i915.CI.Full: " Patchwork
@ 2025-12-12 20:01 ` Patchwork
  2025-12-13  4:00 ` ✓ i915.CI.Full: " Patchwork
  54 siblings, 0 replies; 137+ messages in thread
From: Patchwork @ 2025-12-12 20:01 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 3914 bytes --]

== Series Details ==

Series: drm/i915/dp: Clean up link BW/DSC slice config computation (rev3)
URL   : https://patchwork.freedesktop.org/series/158180/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_17673 -> Patchwork_158180v3
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/index.html

Participating hosts (43 -> 41)
------------------------------

  Missing    (2): bat-dg2-13 fi-snb-2520m 

Known issues
------------

  Here are the changes found in Patchwork_158180v3 that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@i915_selftest@live:
    - bat-mtlp-8:         [PASS][1] -> [DMESG-FAIL][2] ([i915#12061]) +1 other test dmesg-fail
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-mtlp-8/igt@i915_selftest@live.html
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-mtlp-8/igt@i915_selftest@live.html

  * igt@i915_selftest@live@workarounds:
    - bat-dg2-9:          [PASS][3] -> [DMESG-FAIL][4] ([i915#12061]) +1 other test dmesg-fail
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-dg2-9/igt@i915_selftest@live@workarounds.html
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-dg2-9/igt@i915_selftest@live@workarounds.html

  
#### Possible fixes ####

  * igt@i915_selftest@live@workarounds:
    - bat-arlh-2:         [DMESG-FAIL][5] ([i915#12061]) -> [PASS][6] +1 other test pass
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-arlh-2/igt@i915_selftest@live@workarounds.html
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-arlh-2/igt@i915_selftest@live@workarounds.html
    - bat-dg1-6:          [INCOMPLETE][7] -> [PASS][8] +1 other test pass
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-dg1-6/igt@i915_selftest@live@workarounds.html
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-dg1-6/igt@i915_selftest@live@workarounds.html
    - bat-dg2-14:         [DMESG-FAIL][9] ([i915#12061]) -> [PASS][10] +1 other test pass
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-dg2-14/igt@i915_selftest@live@workarounds.html
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-dg2-14/igt@i915_selftest@live@workarounds.html

  
#### Warnings ####

  * igt@i915_selftest@live:
    - bat-atsm-1:         [DMESG-FAIL][11] ([i915#12061] / [i915#13929]) -> [DMESG-FAIL][12] ([i915#12061] / [i915#14204])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-atsm-1/igt@i915_selftest@live.html
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-atsm-1/igt@i915_selftest@live.html

  * igt@i915_selftest@live@mman:
    - bat-atsm-1:         [DMESG-FAIL][13] ([i915#13929]) -> [DMESG-FAIL][14] ([i915#14204])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/bat-atsm-1/igt@i915_selftest@live@mman.html
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/bat-atsm-1/igt@i915_selftest@live@mman.html

  
  [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
  [i915#13929]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13929
  [i915#14204]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14204


Build changes
-------------

  * Linux: CI_DRM_17673 -> Patchwork_158180v3

  CI-20190529: 20190529
  CI_DRM_17673: 90eba5e4087d6932c174f97637833862c9f9ec25 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8664: 28cc709ad89c0ef569569f19f4772d4cca354963 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_158180v3: 90eba5e4087d6932c174f97637833862c9f9ec25 @ git://anongit.freedesktop.org/gfx-ci/linux

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/index.html

[-- Attachment #2: Type: text/html, Size: 5155 bytes --]

^ permalink raw reply	[flat|nested] 137+ messages in thread

* ✓ i915.CI.Full: success for drm/i915/dp: Clean up link BW/DSC slice config computation (rev3)
  2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
                   ` (53 preceding siblings ...)
  2025-12-12 20:01 ` ✓ i915.CI.BAT: success for drm/i915/dp: Clean up link BW/DSC slice config computation (rev3) Patchwork
@ 2025-12-13  4:00 ` Patchwork
  54 siblings, 0 replies; 137+ messages in thread
From: Patchwork @ 2025-12-13  4:00 UTC (permalink / raw)
  To: Imre Deak; +Cc: intel-gfx

[-- Attachment #1: Type: text/plain, Size: 117031 bytes --]

== Series Details ==

Series: drm/i915/dp: Clean up link BW/DSC slice config computation (rev3)
URL   : https://patchwork.freedesktop.org/series/158180/
State : success

== Summary ==

CI Bug Log - changes from CI_DRM_17673_full -> Patchwork_158180v3_full
====================================================

Summary
-------

  **SUCCESS**

  No regressions found.

  

Participating hosts (11 -> 11)
------------------------------

  No changes in participating hosts

Known issues
------------

  Here are the changes found in Patchwork_158180v3_full that come from known issues:

### IGT changes ###

#### Issues hit ####

  * igt@api_intel_bb@blit-reloc-purge-cache:
    - shard-rkl:          NOTRUN -> [SKIP][1] ([i915#8411])
   [1]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@api_intel_bb@blit-reloc-purge-cache.html

  * igt@api_intel_bb@crc32:
    - shard-rkl:          NOTRUN -> [SKIP][2] ([i915#6230])
   [2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@api_intel_bb@crc32.html

  * igt@device_reset@cold-reset-bound:
    - shard-tglu-1:       NOTRUN -> [SKIP][3] ([i915#11078])
   [3]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@device_reset@cold-reset-bound.html
    - shard-dg2:          NOTRUN -> [SKIP][4] ([i915#11078])
   [4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@device_reset@cold-reset-bound.html

  * igt@drm_buddy@drm_buddy@drm_test_buddy_fragmentation_performance:
    - shard-rkl:          NOTRUN -> [DMESG-WARN][5] ([i915#15095]) +1 other test dmesg-warn
   [5]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@drm_buddy@drm_buddy@drm_test_buddy_fragmentation_performance.html

  * igt@gem_bad_reloc@negative-reloc-lut:
    - shard-rkl:          NOTRUN -> [SKIP][6] ([i915#3281]) +6 other tests skip
   [6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@gem_bad_reloc@negative-reloc-lut.html

  * igt@gem_basic@multigpu-create-close:
    - shard-rkl:          NOTRUN -> [SKIP][7] ([i915#7697])
   [7]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_basic@multigpu-create-close.html

  * igt@gem_ccs@ctrl-surf-copy:
    - shard-rkl:          NOTRUN -> [SKIP][8] ([i915#3555] / [i915#9323])
   [8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_ccs@ctrl-surf-copy.html

  * igt@gem_ccs@ctrl-surf-copy-new-ctx:
    - shard-rkl:          NOTRUN -> [SKIP][9] ([i915#9323])
   [9]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_ccs@ctrl-surf-copy-new-ctx.html

  * igt@gem_ccs@large-ctrl-surf-copy:
    - shard-rkl:          NOTRUN -> [SKIP][10] ([i915#13008])
   [10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_ccs@large-ctrl-surf-copy.html

  * igt@gem_ccs@suspend-resume@linear-compressed-compfmt0-lmem0-lmem0:
    - shard-dg2:          NOTRUN -> [INCOMPLETE][11] ([i915#12392] / [i915#13356])
   [11]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-7/igt@gem_ccs@suspend-resume@linear-compressed-compfmt0-lmem0-lmem0.html

  * igt@gem_close_race@multigpu-basic-threads:
    - shard-tglu:         NOTRUN -> [SKIP][12] ([i915#7697])
   [12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@gem_close_race@multigpu-basic-threads.html

  * igt@gem_create@create-ext-cpu-access-big:
    - shard-dg2:          NOTRUN -> [ABORT][13] ([i915#13427])
   [13]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_create@create-ext-cpu-access-big.html

  * igt@gem_create@create-ext-cpu-access-sanity-check:
    - shard-rkl:          NOTRUN -> [SKIP][14] ([i915#6335])
   [14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_create@create-ext-cpu-access-sanity-check.html

  * igt@gem_ctx_sseu@invalid-sseu:
    - shard-rkl:          NOTRUN -> [SKIP][15] ([i915#280])
   [15]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_ctx_sseu@invalid-sseu.html

  * igt@gem_exec_balancer@parallel-balancer:
    - shard-tglu-1:       NOTRUN -> [SKIP][16] ([i915#4525]) +1 other test skip
   [16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@gem_exec_balancer@parallel-balancer.html

  * igt@gem_exec_capture@capture-recoverable:
    - shard-rkl:          NOTRUN -> [SKIP][17] ([i915#6344])
   [17]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_exec_capture@capture-recoverable.html

  * igt@gem_exec_reloc@basic-concurrent16:
    - shard-dg2:          NOTRUN -> [SKIP][18] ([i915#3281]) +3 other tests skip
   [18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_exec_reloc@basic-concurrent16.html

  * igt@gem_lmem_swapping@heavy-verify-random-ccs:
    - shard-rkl:          NOTRUN -> [SKIP][19] ([i915#4613]) +2 other tests skip
   [19]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@gem_lmem_swapping@heavy-verify-random-ccs.html

  * igt@gem_lmem_swapping@parallel-random:
    - shard-glk:          NOTRUN -> [SKIP][20] ([i915#4613]) +4 other tests skip
   [20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk8/igt@gem_lmem_swapping@parallel-random.html

  * igt@gem_lmem_swapping@parallel-random-verify-ccs:
    - shard-tglu-1:       NOTRUN -> [SKIP][21] ([i915#4613]) +3 other tests skip
   [21]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@gem_lmem_swapping@parallel-random-verify-ccs.html

  * igt@gem_lmem_swapping@verify-random:
    - shard-tglu:         NOTRUN -> [SKIP][22] ([i915#4613]) +1 other test skip
   [22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@gem_lmem_swapping@verify-random.html

  * igt@gem_mmap_gtt@ptrace:
    - shard-dg2:          NOTRUN -> [SKIP][23] ([i915#4077]) +6 other tests skip
   [23]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_mmap_gtt@ptrace.html

  * igt@gem_mmap_offset@clear-via-pagefault:
    - shard-mtlp:         [PASS][24] -> [ABORT][25] ([i915#14809]) +1 other test abort
   [24]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-mtlp-6/igt@gem_mmap_offset@clear-via-pagefault.html
   [25]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-mtlp-3/igt@gem_mmap_offset@clear-via-pagefault.html

  * igt@gem_mmap_wc@close:
    - shard-dg2:          NOTRUN -> [SKIP][26] ([i915#4083])
   [26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@gem_mmap_wc@close.html

  * igt@gem_pread@bench:
    - shard-dg2:          NOTRUN -> [SKIP][27] ([i915#3282])
   [27]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_pread@bench.html

  * igt@gem_pread@exhaustion:
    - shard-glk:          NOTRUN -> [WARN][28] ([i915#2658])
   [28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@gem_pread@exhaustion.html
    - shard-tglu-1:       NOTRUN -> [WARN][29] ([i915#2658])
   [29]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@gem_pread@exhaustion.html

  * igt@gem_pread@snoop:
    - shard-rkl:          NOTRUN -> [SKIP][30] ([i915#3282]) +5 other tests skip
   [30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_pread@snoop.html

  * igt@gem_pwrite@basic-exhaustion:
    - shard-tglu:         NOTRUN -> [WARN][31] ([i915#2658])
   [31]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@gem_pwrite@basic-exhaustion.html

  * igt@gem_pxp@verify-pxp-stale-buf-optout-execution:
    - shard-rkl:          NOTRUN -> [SKIP][32] ([i915#4270])
   [32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_pxp@verify-pxp-stale-buf-optout-execution.html

  * igt@gem_render_copy@yf-tiled-mc-ccs-to-vebox-yf-tiled:
    - shard-dg2:          NOTRUN -> [SKIP][33] ([i915#5190] / [i915#8428]) +2 other tests skip
   [33]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_render_copy@yf-tiled-mc-ccs-to-vebox-yf-tiled.html

  * igt@gem_userptr_blits@access-control:
    - shard-dg2:          NOTRUN -> [SKIP][34] ([i915#3297])
   [34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@gem_userptr_blits@access-control.html

  * igt@gem_userptr_blits@coherency-sync:
    - shard-tglu-1:       NOTRUN -> [SKIP][35] ([i915#3297])
   [35]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@gem_userptr_blits@coherency-sync.html

  * igt@gem_userptr_blits@readonly-unsync:
    - shard-rkl:          NOTRUN -> [SKIP][36] ([i915#3297])
   [36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_userptr_blits@readonly-unsync.html

  * igt@gen9_exec_parse@allowed-all:
    - shard-tglu:         NOTRUN -> [SKIP][37] ([i915#2527] / [i915#2856]) +1 other test skip
   [37]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@gen9_exec_parse@allowed-all.html

  * igt@gen9_exec_parse@allowed-single:
    - shard-rkl:          NOTRUN -> [SKIP][38] ([i915#2527]) +5 other tests skip
   [38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@gen9_exec_parse@allowed-single.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-dg2:          NOTRUN -> [SKIP][39] ([i915#2856]) +1 other test skip
   [39]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@gen9_exec_parse@shadow-peek.html

  * igt@i915_module_load@reload-no-display:
    - shard-tglu-1:       NOTRUN -> [DMESG-WARN][40] ([i915#13029] / [i915#14545])
   [40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@i915_module_load@reload-no-display.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-mtlp:         [PASS][41] -> [ABORT][42] ([i915#15342])
   [41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-mtlp-3/igt@i915_module_load@reload-with-fault-injection.html
   [42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-mtlp-7/igt@i915_module_load@reload-with-fault-injection.html

  * igt@i915_pm_freq_api@freq-reset-multiple:
    - shard-rkl:          NOTRUN -> [SKIP][43] ([i915#8399])
   [43]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@i915_pm_freq_api@freq-reset-multiple.html
    - shard-tglu:         NOTRUN -> [SKIP][44] ([i915#8399])
   [44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@i915_pm_freq_api@freq-reset-multiple.html

  * igt@i915_pm_rc6_residency@rc6-idle:
    - shard-tglu-1:       NOTRUN -> [SKIP][45] ([i915#14498])
   [45]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@i915_pm_rc6_residency@rc6-idle.html

  * igt@i915_pm_rpm@system-suspend:
    - shard-glk:          NOTRUN -> [INCOMPLETE][46] ([i915#13356]) +2 other tests incomplete
   [46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk8/igt@i915_pm_rpm@system-suspend.html

  * igt@i915_pm_rps@min-max-config-idle:
    - shard-dg2:          NOTRUN -> [SKIP][47] ([i915#11681] / [i915#6621])
   [47]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@i915_pm_rps@min-max-config-idle.html

  * igt@i915_pm_sseu@full-enable:
    - shard-tglu-1:       NOTRUN -> [SKIP][48] ([i915#4387])
   [48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@i915_pm_sseu@full-enable.html

  * igt@i915_power@sanity:
    - shard-mtlp:         [PASS][49] -> [SKIP][50] ([i915#7984])
   [49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-mtlp-4/igt@i915_power@sanity.html
   [50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-mtlp-6/igt@i915_power@sanity.html

  * igt@i915_query@hwconfig_table:
    - shard-tglu:         NOTRUN -> [SKIP][51] ([i915#6245])
   [51]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@i915_query@hwconfig_table.html

  * igt@i915_selftest@live@workarounds:
    - shard-dg2:          NOTRUN -> [DMESG-FAIL][52] ([i915#12061]) +1 other test dmesg-fail
   [52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@i915_selftest@live@workarounds.html
    - shard-mtlp:         [PASS][53] -> [DMESG-FAIL][54] ([i915#12061]) +1 other test dmesg-fail
   [53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-mtlp-4/igt@i915_selftest@live@workarounds.html
   [54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-mtlp-6/igt@i915_selftest@live@workarounds.html

  * igt@i915_suspend@debugfs-reader:
    - shard-glk:          [PASS][55] -> [INCOMPLETE][56] ([i915#4817])
   [55]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-glk9/igt@i915_suspend@debugfs-reader.html
   [56]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk8/igt@i915_suspend@debugfs-reader.html

  * igt@i915_suspend@forcewake:
    - shard-glk:          NOTRUN -> [INCOMPLETE][57] ([i915#4817]) +2 other tests incomplete
   [57]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk9/igt@i915_suspend@forcewake.html

  * igt@i915_suspend@sysfs-reader:
    - shard-rkl:          [PASS][58] -> [ABORT][59] ([i915#15140])
   [58]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@i915_suspend@sysfs-reader.html
   [59]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-1/igt@i915_suspend@sysfs-reader.html

  * igt@kms_async_flips@alternate-sync-async-flip-atomic@pipe-a-hdmi-a-1:
    - shard-glk:          [PASS][60] -> [FAIL][61] ([i915#14888]) +1 other test fail
   [60]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-glk9/igt@kms_async_flips@alternate-sync-async-flip-atomic@pipe-a-hdmi-a-1.html
   [61]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk9/igt@kms_async_flips@alternate-sync-async-flip-atomic@pipe-a-hdmi-a-1.html

  * igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-1:
    - shard-glk:          [PASS][62] -> [INCOMPLETE][63] ([i915#12761])
   [62]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-glk1/igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-1.html
   [63]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk2/igt@kms_async_flips@async-flip-suspend-resume@pipe-a-hdmi-a-1.html

  * igt@kms_atomic@plane-primary-overlay-mutable-zpos:
    - shard-dg2:          NOTRUN -> [SKIP][64] ([i915#9531])
   [64]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing:
    - shard-dg2:          [PASS][65] -> [FAIL][66] ([i915#5956])
   [65]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-11/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html
   [66]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_atomic_transition@plane-all-modeset-transition-fencing.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-fencing@pipe-a-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [FAIL][67] ([i915#5956])
   [67]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_atomic_transition@plane-all-modeset-transition-fencing@pipe-a-hdmi-a-3.html

  * igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels:
    - shard-glk:          NOTRUN -> [SKIP][68] ([i915#1769])
   [68]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html
    - shard-tglu-1:       NOTRUN -> [SKIP][69] ([i915#1769] / [i915#3555])
   [69]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_atomic_transition@plane-all-modeset-transition-internal-panels.html

  * igt@kms_big_fb@4-tiled-16bpp-rotate-0:
    - shard-tglu:         NOTRUN -> [SKIP][70] ([i915#5286]) +3 other tests skip
   [70]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_big_fb@4-tiled-16bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-64bpp-rotate-0:
    - shard-rkl:          NOTRUN -> [SKIP][71] ([i915#5286]) +5 other tests skip
   [71]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_big_fb@4-tiled-64bpp-rotate-0.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-0-hflip:
    - shard-tglu-1:       NOTRUN -> [SKIP][72] ([i915#5286]) +3 other tests skip
   [72]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_big_fb@4-tiled-max-hw-stride-32bpp-rotate-0-hflip.html

  * igt@kms_big_fb@linear-32bpp-rotate-90:
    - shard-rkl:          NOTRUN -> [SKIP][73] ([i915#3638])
   [73]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_big_fb@linear-32bpp-rotate-90.html

  * igt@kms_big_fb@y-tiled-addfb:
    - shard-dg2:          NOTRUN -> [SKIP][74] ([i915#5190])
   [74]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_big_fb@y-tiled-addfb.html

  * igt@kms_big_fb@yf-tiled-16bpp-rotate-270:
    - shard-rkl:          NOTRUN -> [SKIP][75] +17 other tests skip
   [75]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_big_fb@yf-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@yf-tiled-32bpp-rotate-180:
    - shard-dg2:          NOTRUN -> [SKIP][76] ([i915#4538] / [i915#5190]) +3 other tests skip
   [76]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_big_fb@yf-tiled-32bpp-rotate-180.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-dg2-mc-ccs@pipe-d-hdmi-a-3:
    - shard-dg2:          NOTRUN -> [SKIP][77] ([i915#6095]) +45 other tests skip
   [77]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_ccs@bad-rotation-90-4-tiled-dg2-mc-ccs@pipe-d-hdmi-a-3.html

  * igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][78] ([i915#6095]) +39 other tests skip
   [78]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_ccs@bad-rotation-90-y-tiled-gen12-mc-ccs.html

  * igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-b-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [SKIP][79] ([i915#6095]) +79 other tests skip
   [79]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-5/igt@kms_ccs@crc-primary-basic-4-tiled-mtl-mc-ccs@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-basic-y-tiled-gen12-rc-ccs@pipe-a-dp-3:
    - shard-dg2:          NOTRUN -> [SKIP][80] ([i915#10307] / [i915#6095]) +132 other tests skip
   [80]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-11/igt@kms_ccs@crc-primary-basic-y-tiled-gen12-rc-ccs@pipe-a-dp-3.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-1:
    - shard-tglu-1:       NOTRUN -> [SKIP][81] ([i915#6095]) +59 other tests skip
   [81]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_ccs@crc-primary-rotation-180-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-rotation-180-yf-tiled-ccs@pipe-d-hdmi-a-1:
    - shard-dg2:          NOTRUN -> [SKIP][82] ([i915#10307] / [i915#10434] / [i915#6095]) +1 other test skip
   [82]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-4/igt@kms_ccs@crc-primary-rotation-180-yf-tiled-ccs@pipe-d-hdmi-a-1.html

  * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
    - shard-glk10:        NOTRUN -> [INCOMPLETE][83] ([i915#12796]) +1 other test incomplete
   [83]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs:
    - shard-dg2:          NOTRUN -> [SKIP][84] ([i915#12313])
   [84]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs:
    - shard-tglu:         NOTRUN -> [SKIP][85] ([i915#12313])
   [85]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-lnl-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-y-tiled-ccs@pipe-c-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [SKIP][86] ([i915#14098] / [i915#6095]) +54 other tests skip
   [86]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-5/igt@kms_ccs@crc-sprite-planes-basic-y-tiled-ccs@pipe-c-hdmi-a-1.html

  * igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-3:
    - shard-dg1:          NOTRUN -> [SKIP][87] ([i915#6095]) +143 other tests skip
   [87]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-12/igt@kms_ccs@random-ccs-data-4-tiled-mtl-rc-ccs-cc@pipe-b-hdmi-a-3.html

  * igt@kms_cdclk@mode-transition:
    - shard-tglu:         NOTRUN -> [SKIP][88] ([i915#3742])
   [88]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_cdclk@mode-transition.html

  * igt@kms_chamelium_edid@dp-edid-resolution-list:
    - shard-tglu-1:       NOTRUN -> [SKIP][89] ([i915#11151] / [i915#7828]) +5 other tests skip
   [89]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_chamelium_edid@dp-edid-resolution-list.html

  * igt@kms_chamelium_edid@hdmi-edid-read:
    - shard-rkl:          NOTRUN -> [SKIP][90] ([i915#11151] / [i915#7828]) +8 other tests skip
   [90]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_chamelium_edid@hdmi-edid-read.html

  * igt@kms_chamelium_frames@dp-crc-fast:
    - shard-dg2:          NOTRUN -> [SKIP][91] ([i915#11151] / [i915#7828]) +2 other tests skip
   [91]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_chamelium_frames@dp-crc-fast.html

  * igt@kms_chamelium_frames@hdmi-aspect-ratio:
    - shard-tglu:         NOTRUN -> [SKIP][92] ([i915#11151] / [i915#7828]) +4 other tests skip
   [92]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_chamelium_frames@hdmi-aspect-ratio.html

  * igt@kms_content_protection@content-type-change:
    - shard-tglu:         NOTRUN -> [SKIP][93] ([i915#6944] / [i915#9424])
   [93]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_content_protection@content-type-change.html

  * igt@kms_content_protection@dp-mst-lic-type-1:
    - shard-rkl:          NOTRUN -> [SKIP][94] ([i915#3116])
   [94]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_content_protection@dp-mst-lic-type-1.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-dg2:          NOTRUN -> [SKIP][95] ([i915#3299]) +1 other test skip
   [95]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@mei-interface:
    - shard-rkl:          NOTRUN -> [SKIP][96] ([i915#6944] / [i915#9424]) +1 other test skip
   [96]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_content_protection@mei-interface.html

  * igt@kms_content_protection@srm:
    - shard-rkl:          NOTRUN -> [SKIP][97] ([i915#6944] / [i915#7118])
   [97]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_content_protection@srm.html

  * igt@kms_content_protection@suspend-resume@pipe-a-dp-3:
    - shard-dg2:          NOTRUN -> [FAIL][98] ([i915#7173])
   [98]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-11/igt@kms_content_protection@suspend-resume@pipe-a-dp-3.html

  * igt@kms_cursor_crc@cursor-offscreen-32x10:
    - shard-tglu:         NOTRUN -> [SKIP][99] ([i915#3555])
   [99]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_cursor_crc@cursor-offscreen-32x10.html

  * igt@kms_cursor_crc@cursor-offscreen-512x512:
    - shard-tglu-1:       NOTRUN -> [SKIP][100] ([i915#13049]) +1 other test skip
   [100]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_cursor_crc@cursor-offscreen-512x512.html

  * igt@kms_cursor_crc@cursor-onscreen-256x85:
    - shard-rkl:          [PASS][101] -> [FAIL][102] ([i915#13566])
   [101]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_cursor_crc@cursor-onscreen-256x85.html
   [102]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-5/igt@kms_cursor_crc@cursor-onscreen-256x85.html

  * igt@kms_cursor_crc@cursor-onscreen-256x85@pipe-a-hdmi-a-1:
    - shard-rkl:          NOTRUN -> [FAIL][103] ([i915#13566])
   [103]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-5/igt@kms_cursor_crc@cursor-onscreen-256x85@pipe-a-hdmi-a-1.html

  * igt@kms_cursor_crc@cursor-onscreen-32x32:
    - shard-rkl:          NOTRUN -> [SKIP][104] ([i915#3555]) +3 other tests skip
   [104]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_cursor_crc@cursor-onscreen-32x32.html

  * igt@kms_cursor_crc@cursor-onscreen-512x512:
    - shard-tglu:         NOTRUN -> [SKIP][105] ([i915#13049])
   [105]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_cursor_crc@cursor-onscreen-512x512.html

  * igt@kms_cursor_crc@cursor-random-256x85@pipe-a-hdmi-a-1:
    - shard-tglu:         [PASS][106] -> [FAIL][107] ([i915#13566]) +1 other test fail
   [106]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-tglu-3/igt@kms_cursor_crc@cursor-random-256x85@pipe-a-hdmi-a-1.html
   [107]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-10/igt@kms_cursor_crc@cursor-random-256x85@pipe-a-hdmi-a-1.html

  * igt@kms_cursor_crc@cursor-random-32x10:
    - shard-dg2:          NOTRUN -> [SKIP][108] ([i915#3555])
   [108]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_cursor_crc@cursor-random-32x10.html

  * igt@kms_cursor_crc@cursor-random-512x170:
    - shard-rkl:          NOTRUN -> [SKIP][109] ([i915#13049]) +3 other tests skip
   [109]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_cursor_crc@cursor-random-512x170.html

  * igt@kms_cursor_crc@cursor-sliding-32x10:
    - shard-tglu-1:       NOTRUN -> [SKIP][110] ([i915#3555]) +4 other tests skip
   [110]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_cursor_crc@cursor-sliding-32x10.html

  * igt@kms_cursor_crc@cursor-suspend:
    - shard-rkl:          [PASS][111] -> [INCOMPLETE][112] ([i915#12358] / [i915#14152])
   [111]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-5/igt@kms_cursor_crc@cursor-suspend.html
   [112]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_cursor_crc@cursor-suspend.html

  * igt@kms_cursor_crc@cursor-suspend@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [INCOMPLETE][113] ([i915#12358] / [i915#14152])
   [113]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_cursor_crc@cursor-suspend@pipe-a-hdmi-a-2.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions:
    - shard-tglu-1:       NOTRUN -> [SKIP][114] ([i915#4103])
   [114]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions.html

  * igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size:
    - shard-tglu:         NOTRUN -> [SKIP][115] ([i915#4103])
   [115]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_cursor_legacy@short-busy-flip-before-cursor-atomic-transitions-varying-size.html

  * igt@kms_dirtyfb@psr-dirtyfb-ioctl:
    - shard-rkl:          NOTRUN -> [SKIP][116] ([i915#9723])
   [116]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_dirtyfb@psr-dirtyfb-ioctl.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc:
    - shard-rkl:          NOTRUN -> [SKIP][117] ([i915#3555] / [i915#3804])
   [117]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_dither@fb-8bpc-vs-panel-6bpc.html

  * igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-2:
    - shard-rkl:          NOTRUN -> [SKIP][118] ([i915#3804])
   [118]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_dither@fb-8bpc-vs-panel-6bpc@pipe-a-hdmi-a-2.html

  * igt@kms_dp_aux_dev:
    - shard-rkl:          NOTRUN -> [SKIP][119] ([i915#1257])
   [119]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_dp_aux_dev.html

  * igt@kms_dp_link_training@non-uhbr-sst:
    - shard-rkl:          NOTRUN -> [SKIP][120] ([i915#13749])
   [120]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_dp_link_training@non-uhbr-sst.html
    - shard-tglu:         NOTRUN -> [SKIP][121] ([i915#13749])
   [121]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_dp_link_training@non-uhbr-sst.html

  * igt@kms_dp_linktrain_fallback@dp-fallback:
    - shard-tglu:         NOTRUN -> [SKIP][122] ([i915#13707])
   [122]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_dp_linktrain_fallback@dp-fallback.html

  * igt@kms_dsc@dsc-with-bpc:
    - shard-rkl:          NOTRUN -> [SKIP][123] ([i915#3555] / [i915#3840])
   [123]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_dsc@dsc-with-bpc.html

  * igt@kms_dsc@dsc-with-formats:
    - shard-dg2:          NOTRUN -> [SKIP][124] ([i915#3555] / [i915#3840])
   [124]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_dsc@dsc-with-formats.html

  * igt@kms_dsc@dsc-with-output-formats-with-bpc:
    - shard-dg2:          NOTRUN -> [SKIP][125] ([i915#3840] / [i915#9053])
   [125]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_dsc@dsc-with-output-formats-with-bpc.html

  * igt@kms_fbcon_fbt@psr-suspend:
    - shard-tglu-1:       NOTRUN -> [SKIP][126] ([i915#3469])
   [126]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_fbcon_fbt@psr-suspend.html

  * igt@kms_feature_discovery@chamelium:
    - shard-tglu-1:       NOTRUN -> [SKIP][127] ([i915#2065] / [i915#4854])
   [127]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_feature_discovery@chamelium.html

  * igt@kms_feature_discovery@display-3x:
    - shard-dg2:          NOTRUN -> [SKIP][128] ([i915#1839])
   [128]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_feature_discovery@display-3x.html

  * igt@kms_feature_discovery@psr2:
    - shard-rkl:          NOTRUN -> [SKIP][129] ([i915#658])
   [129]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_feature_discovery@psr2.html

  * igt@kms_flip@2x-flip-vs-dpms:
    - shard-tglu-1:       NOTRUN -> [SKIP][130] ([i915#3637] / [i915#9934]) +3 other tests skip
   [130]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_flip@2x-flip-vs-dpms.html

  * igt@kms_flip@2x-flip-vs-panning-vs-hang:
    - shard-dg2:          NOTRUN -> [SKIP][131] ([i915#9934]) +5 other tests skip
   [131]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_flip@2x-flip-vs-panning-vs-hang.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible:
    - shard-glk:          NOTRUN -> [INCOMPLETE][132] ([i915#12745] / [i915#4839])
   [132]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@kms_flip@2x-flip-vs-suspend-interruptible.html

  * igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-hdmi-a1-hdmi-a2:
    - shard-glk:          NOTRUN -> [INCOMPLETE][133] ([i915#4839])
   [133]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@kms_flip@2x-flip-vs-suspend-interruptible@ab-hdmi-a1-hdmi-a2.html

  * igt@kms_flip@2x-modeset-vs-vblank-race:
    - shard-rkl:          NOTRUN -> [SKIP][134] ([i915#9934]) +6 other tests skip
   [134]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_flip@2x-modeset-vs-vblank-race.html

  * igt@kms_flip@2x-plain-flip-interruptible:
    - shard-tglu:         NOTRUN -> [SKIP][135] ([i915#3637] / [i915#9934]) +3 other tests skip
   [135]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_flip@2x-plain-flip-interruptible.html

  * igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-upscaling:
    - shard-tglu-1:       NOTRUN -> [SKIP][136] ([i915#2672] / [i915#3555])
   [136]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_flip_scaled_crc@flip-32bpp-4tile-to-64bpp-4tile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling:
    - shard-tglu:         NOTRUN -> [SKIP][137] ([i915#2672] / [i915#3555]) +2 other tests skip
   [137]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-valid-mode:
    - shard-rkl:          NOTRUN -> [SKIP][138] ([i915#2672]) +5 other tests skip
   [138]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-valid-mode.html
    - shard-tglu:         NOTRUN -> [SKIP][139] ([i915#2587] / [i915#2672]) +2 other tests skip
   [139]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-32bpp-yftileccs-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling:
    - shard-tglu-1:       NOTRUN -> [SKIP][140] ([i915#2587] / [i915#2672] / [i915#3555])
   [140]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling@pipe-a-valid-mode:
    - shard-tglu-1:       NOTRUN -> [SKIP][141] ([i915#2587] / [i915#2672]) +1 other test skip
   [141]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_flip_scaled_crc@flip-32bpp-ytileccs-to-64bpp-ytile-upscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling:
    - shard-rkl:          NOTRUN -> [SKIP][142] ([i915#2672] / [i915#3555]) +5 other tests skip
   [142]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_flip_scaled_crc@flip-64bpp-4tile-to-32bpp-4tiledg2rcccs-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling:
    - shard-dg2:          NOTRUN -> [SKIP][143] ([i915#2672] / [i915#3555]) +1 other test skip
   [143]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_flip_scaled_crc@flip-64bpp-ytile-to-32bpp-ytilegen12rcccs-upscaling.html

  * igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-render:
    - shard-glk:          NOTRUN -> [DMESG-FAIL][144] ([i915#118])
   [144]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk6/igt@kms_frontbuffer_tracking@fbc-1p-primscrn-spr-indfb-draw-render.html

  * igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt:
    - shard-tglu-1:       NOTRUN -> [SKIP][145] +48 other tests skip
   [145]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_frontbuffer_tracking@fbc-2p-primscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-4:
    - shard-tglu:         NOTRUN -> [SKIP][146] ([i915#5439])
   [146]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_frontbuffer_tracking@fbc-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-blt:
    - shard-rkl:          NOTRUN -> [SKIP][147] ([i915#15102]) +3 other tests skip
   [147]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-wc:
    - shard-tglu-1:       NOTRUN -> [SKIP][148] ([i915#15102]) +15 other tests skip
   [148]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-wc.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-pri-indfb-multidraw:
    - shard-rkl:          NOTRUN -> [SKIP][149] ([i915#15102] / [i915#3023]) +18 other tests skip
   [149]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-pri-indfb-multidraw.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt:
    - shard-snb:          NOTRUN -> [SKIP][150] +68 other tests skip
   [150]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-snb6/igt@kms_frontbuffer_tracking@fbcpsr-1p-primscrn-indfb-plflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][151] ([i915#5354]) +9 other tests skip
   [151]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt:
    - shard-rkl:          NOTRUN -> [SKIP][152] ([i915#1825]) +28 other tests skip
   [152]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-shrfb-pgflip-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-mmap-gtt:
    - shard-dg2:          NOTRUN -> [SKIP][153] ([i915#8708]) +8 other tests skip
   [153]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-mmap-gtt:
    - shard-tglu:         NOTRUN -> [SKIP][154] ([i915#15102]) +12 other tests skip
   [154]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_frontbuffer_tracking@fbcpsr-rgb101010-draw-mmap-gtt.html

  * igt@kms_frontbuffer_tracking@pipe-fbc-rte:
    - shard-rkl:          NOTRUN -> [SKIP][155] ([i915#9766])
   [155]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_frontbuffer_tracking@pipe-fbc-rte.html

  * igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-shrfb-draw-mmap-cpu:
    - shard-dg2:          NOTRUN -> [SKIP][156] ([i915#15102])
   [156]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_frontbuffer_tracking@psr-1p-offscreen-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-mmap-cpu:
    - shard-tglu:         NOTRUN -> [SKIP][157] +33 other tests skip
   [157]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_frontbuffer_tracking@psr-2p-primscrn-pri-shrfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@psr-rgb101010-draw-render:
    - shard-dg2:          NOTRUN -> [SKIP][158] ([i915#15102] / [i915#3458]) +3 other tests skip
   [158]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_frontbuffer_tracking@psr-rgb101010-draw-render.html

  * igt@kms_hdr@bpc-switch:
    - shard-rkl:          [PASS][159] -> [SKIP][160] ([i915#3555] / [i915#8228]) +1 other test skip
   [159]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_hdr@bpc-switch.html
   [160]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_hdr@bpc-switch.html
    - shard-tglu:         NOTRUN -> [SKIP][161] ([i915#3555] / [i915#8228])
   [161]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_hdr@bpc-switch.html

  * igt@kms_hdr@brightness-with-hdr:
    - shard-rkl:          NOTRUN -> [SKIP][162] ([i915#1187] / [i915#12713])
   [162]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_hdr@brightness-with-hdr.html

  * igt@kms_hdr@invalid-hdr:
    - shard-tglu-1:       NOTRUN -> [SKIP][163] ([i915#3555] / [i915#8228])
   [163]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_hdr@invalid-hdr.html

  * igt@kms_hdr@static-swap:
    - shard-dg2:          NOTRUN -> [SKIP][164] ([i915#3555] / [i915#8228]) +1 other test skip
   [164]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_hdr@static-swap.html

  * igt@kms_hdr@static-toggle:
    - shard-dg2:          [PASS][165] -> [SKIP][166] ([i915#3555] / [i915#8228])
   [165]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-11/igt@kms_hdr@static-toggle.html
   [166]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_hdr@static-toggle.html

  * igt@kms_joiner@basic-big-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][167] ([i915#10656])
   [167]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_joiner@basic-big-joiner.html
    - shard-tglu:         NOTRUN -> [SKIP][168] ([i915#10656])
   [168]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_joiner@basic-big-joiner.html

  * igt@kms_joiner@basic-force-big-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][169] ([i915#12388])
   [169]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_joiner@basic-force-big-joiner.html

  * igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner:
    - shard-rkl:          NOTRUN -> [SKIP][170] ([i915#13522])
   [170]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html
    - shard-tglu:         NOTRUN -> [SKIP][171] ([i915#13522])
   [171]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_joiner@switch-modeset-ultra-joiner-big-joiner.html

  * igt@kms_multipipe_modeset@basic-max-pipe-crc-check:
    - shard-tglu-1:       NOTRUN -> [SKIP][172] ([i915#1839])
   [172]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_multipipe_modeset@basic-max-pipe-crc-check.html

  * igt@kms_pipe_stress@stress-xrgb8888-4tiled:
    - shard-tglu-1:       NOTRUN -> [SKIP][173] ([i915#14712])
   [173]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_pipe_stress@stress-xrgb8888-4tiled.html

  * igt@kms_plane_alpha_blend@constant-alpha-max:
    - shard-glk10:        NOTRUN -> [FAIL][174] ([i915#10647] / [i915#12169])
   [174]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_plane_alpha_blend@constant-alpha-max.html

  * igt@kms_plane_alpha_blend@constant-alpha-max@pipe-a-hdmi-a-1:
    - shard-glk10:        NOTRUN -> [FAIL][175] ([i915#10647]) +1 other test fail
   [175]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_plane_alpha_blend@constant-alpha-max@pipe-a-hdmi-a-1.html

  * igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-256:
    - shard-rkl:          [PASS][176] -> [FAIL][177] ([i915#15305]) +2 other tests fail
   [176]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-256.html
   [177]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-256.html

  * igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-64:
    - shard-rkl:          [PASS][178] -> [FAIL][179] ([i915#15385])
   [178]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-64.html
   [179]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_plane_cursor@overlay@pipe-a-hdmi-a-2-size-64.html

  * igt@kms_plane_multiple@2x-tiling-4:
    - shard-rkl:          NOTRUN -> [SKIP][180] ([i915#13958])
   [180]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_plane_multiple@2x-tiling-4.html

  * igt@kms_plane_multiple@2x-tiling-none:
    - shard-tglu-1:       NOTRUN -> [SKIP][181] ([i915#13958])
   [181]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_plane_multiple@2x-tiling-none.html

  * igt@kms_plane_multiple@tiling-yf:
    - shard-tglu:         NOTRUN -> [SKIP][182] ([i915#14259])
   [182]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@kms_plane_multiple@tiling-yf.html

  * igt@kms_plane_scaling@2x-scaler-multi-pipe:
    - shard-dg2:          NOTRUN -> [SKIP][183] ([i915#13046] / [i915#5354] / [i915#9423])
   [183]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_plane_scaling@2x-scaler-multi-pipe.html

  * igt@kms_plane_scaling@intel-max-src-size:
    - shard-rkl:          NOTRUN -> [SKIP][184] ([i915#6953])
   [184]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_plane_scaling@intel-max-src-size.html

  * igt@kms_pm_backlight@bad-brightness:
    - shard-rkl:          NOTRUN -> [SKIP][185] ([i915#5354])
   [185]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_pm_backlight@bad-brightness.html

  * igt@kms_pm_backlight@fade:
    - shard-tglu-1:       NOTRUN -> [SKIP][186] ([i915#9812])
   [186]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_pm_backlight@fade.html

  * igt@kms_pm_dc@dc5-psr:
    - shard-rkl:          NOTRUN -> [SKIP][187] ([i915#9685])
   [187]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_pm_dc@dc5-psr.html

  * igt@kms_pm_dc@dc9-dpms:
    - shard-tglu-1:       NOTRUN -> [SKIP][188] ([i915#4281])
   [188]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_pm_dc@dc9-dpms.html

  * igt@kms_pm_rpm@dpms-lpsp:
    - shard-dg2:          [PASS][189] -> [SKIP][190] ([i915#15073]) +1 other test skip
   [189]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-4/igt@kms_pm_rpm@dpms-lpsp.html
   [190]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-8/igt@kms_pm_rpm@dpms-lpsp.html

  * igt@kms_pm_rpm@modeset-lpsp-stress:
    - shard-rkl:          [PASS][191] -> [SKIP][192] ([i915#15073]) +1 other test skip
   [191]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-5/igt@kms_pm_rpm@modeset-lpsp-stress.html
   [192]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_pm_rpm@modeset-lpsp-stress.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-tglu-1:       NOTRUN -> [SKIP][193] ([i915#15073])
   [193]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_pm_rpm@modeset-non-lpsp.html

  * igt@kms_pm_rpm@modeset-non-lpsp-stress:
    - shard-rkl:          NOTRUN -> [SKIP][194] ([i915#15073]) +2 other tests skip
   [194]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_pm_rpm@modeset-non-lpsp-stress.html

  * igt@kms_pm_rpm@system-suspend-idle:
    - shard-dg2:          NOTRUN -> [INCOMPLETE][195] ([i915#14419])
   [195]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_pm_rpm@system-suspend-idle.html

  * igt@kms_prime@basic-crc-vgem:
    - shard-dg2:          NOTRUN -> [SKIP][196] ([i915#6524] / [i915#6805])
   [196]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_prime@basic-crc-vgem.html

  * igt@kms_prime@basic-modeset-hybrid:
    - shard-tglu-1:       NOTRUN -> [SKIP][197] ([i915#6524])
   [197]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_prime@basic-modeset-hybrid.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf:
    - shard-glk:          NOTRUN -> [SKIP][198] ([i915#11520]) +11 other tests skip
   [198]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-sf.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf:
    - shard-glk10:        NOTRUN -> [SKIP][199] ([i915#11520]) +1 other test skip
   [199]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html
    - shard-dg2:          NOTRUN -> [SKIP][200] ([i915#11520]) +1 other test skip
   [200]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_psr2_sf@fbc-psr2-cursor-plane-update-sf.html

  * igt@kms_psr2_sf@fbc-psr2-primary-plane-update-sf-dmg-area:
    - shard-tglu-1:       NOTRUN -> [SKIP][201] ([i915#11520]) +4 other tests skip
   [201]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_psr2_sf@fbc-psr2-primary-plane-update-sf-dmg-area.html

  * igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf:
    - shard-rkl:          NOTRUN -> [SKIP][202] ([i915#11520]) +9 other tests skip
   [202]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_psr2_sf@pr-overlay-plane-update-continuous-sf.html

  * igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb:
    - shard-snb:          NOTRUN -> [SKIP][203] ([i915#11520]) +1 other test skip
   [203]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-snb6/igt@kms_psr2_sf@pr-primary-plane-update-sf-dmg-area-big-fb.html

  * igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf:
    - shard-tglu:         NOTRUN -> [SKIP][204] ([i915#11520]) +3 other tests skip
   [204]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_psr2_sf@psr2-overlay-plane-move-continuous-exceed-fully-sf.html

  * igt@kms_psr2_su@frontbuffer-xrgb8888:
    - shard-rkl:          NOTRUN -> [SKIP][205] ([i915#9683]) +1 other test skip
   [205]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_psr2_su@frontbuffer-xrgb8888.html

  * igt@kms_psr@fbc-psr-sprite-blt:
    - shard-glk10:        NOTRUN -> [SKIP][206] +58 other tests skip
   [206]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_psr@fbc-psr-sprite-blt.html

  * igt@kms_psr@fbc-psr2-cursor-mmap-gtt:
    - shard-glk:          NOTRUN -> [SKIP][207] +385 other tests skip
   [207]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk1/igt@kms_psr@fbc-psr2-cursor-mmap-gtt.html

  * igt@kms_psr@pr-sprite-plane-onoff:
    - shard-tglu-1:       NOTRUN -> [SKIP][208] ([i915#9732]) +16 other tests skip
   [208]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_psr@pr-sprite-plane-onoff.html

  * igt@kms_psr@psr-cursor-render:
    - shard-rkl:          NOTRUN -> [SKIP][209] ([i915#1072] / [i915#9732]) +21 other tests skip
   [209]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_psr@psr-cursor-render.html

  * igt@kms_psr@psr2-cursor-plane-onoff:
    - shard-tglu:         NOTRUN -> [SKIP][210] ([i915#9732]) +13 other tests skip
   [210]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_psr@psr2-cursor-plane-onoff.html

  * igt@kms_psr@psr2-sprite-render:
    - shard-dg2:          NOTRUN -> [SKIP][211] ([i915#1072] / [i915#9732]) +5 other tests skip
   [211]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@kms_psr@psr2-sprite-render.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0:
    - shard-tglu:         NOTRUN -> [SKIP][212] ([i915#5289])
   [212]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-6/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-0.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180:
    - shard-rkl:          NOTRUN -> [SKIP][213] ([i915#5289])
   [213]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-180.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90:
    - shard-tglu-1:       NOTRUN -> [SKIP][214] ([i915#5289]) +1 other test skip
   [214]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-90.html

  * igt@kms_rotation_crc@sprite-rotation-90-pos-100-0:
    - shard-dg2:          NOTRUN -> [SKIP][215] ([i915#12755])
   [215]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@kms_rotation_crc@sprite-rotation-90-pos-100-0.html

  * igt@kms_selftest@drm_framebuffer:
    - shard-glk:          NOTRUN -> [ABORT][216] ([i915#13179]) +1 other test abort
   [216]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk6/igt@kms_selftest@drm_framebuffer.html

  * igt@kms_vblank@ts-continuation-dpms-suspend:
    - shard-glk10:        NOTRUN -> [INCOMPLETE][217] ([i915#12276]) +1 other test incomplete
   [217]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-glk10/igt@kms_vblank@ts-continuation-dpms-suspend.html

  * igt@kms_vrr@flip-dpms:
    - shard-rkl:          NOTRUN -> [SKIP][218] ([i915#15243] / [i915#3555])
   [218]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_vrr@flip-dpms.html

  * igt@kms_vrr@lobf:
    - shard-rkl:          NOTRUN -> [SKIP][219] ([i915#11920])
   [219]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_vrr@lobf.html

  * igt@kms_vrr@max-min:
    - shard-tglu-1:       NOTRUN -> [SKIP][220] ([i915#9906]) +1 other test skip
   [220]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@kms_vrr@max-min.html

  * igt@kms_vrr@seamless-rr-switch-drrs:
    - shard-rkl:          NOTRUN -> [SKIP][221] ([i915#9906])
   [221]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_vrr@seamless-rr-switch-drrs.html

  * igt@kms_vrr@seamless-rr-switch-vrr:
    - shard-tglu:         NOTRUN -> [SKIP][222] ([i915#9906])
   [222]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-9/igt@kms_vrr@seamless-rr-switch-vrr.html

  * igt@perf@global-sseu-config-invalid:
    - shard-dg2:          NOTRUN -> [SKIP][223] ([i915#7387])
   [223]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-1/igt@perf@global-sseu-config-invalid.html

  * igt@perf@per-context-mode-unprivileged:
    - shard-rkl:          NOTRUN -> [SKIP][224] ([i915#2435])
   [224]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@perf@per-context-mode-unprivileged.html

  * igt@perf@unprivileged-single-ctx-counters:
    - shard-rkl:          NOTRUN -> [SKIP][225] ([i915#2433])
   [225]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@perf@unprivileged-single-ctx-counters.html

  * igt@perf_pmu@busy-double-start@vecs1:
    - shard-dg2:          [PASS][226] -> [FAIL][227] ([i915#4349]) +4 other tests fail
   [226]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-11/igt@perf_pmu@busy-double-start@vecs1.html
   [227]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@perf_pmu@busy-double-start@vecs1.html

  * igt@perf_pmu@rc6@other-idle-gt0:
    - shard-rkl:          NOTRUN -> [SKIP][228] ([i915#8516])
   [228]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@perf_pmu@rc6@other-idle-gt0.html

  * igt@prime_udl:
    - shard-dg2:          NOTRUN -> [SKIP][229] +4 other tests skip
   [229]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-5/igt@prime_udl.html

  * igt@prime_vgem@basic-read:
    - shard-rkl:          NOTRUN -> [SKIP][230] ([i915#3291] / [i915#3708])
   [230]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@prime_vgem@basic-read.html

  * igt@prime_vgem@fence-read-hang:
    - shard-dg2:          NOTRUN -> [SKIP][231] ([i915#3708]) +1 other test skip
   [231]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@prime_vgem@fence-read-hang.html

  * igt@sriov_basic@enable-vfs-autoprobe-off@numvfs-random:
    - shard-tglu-1:       NOTRUN -> [FAIL][232] ([i915#12910]) +9 other tests fail
   [232]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-1/igt@sriov_basic@enable-vfs-autoprobe-off@numvfs-random.html

  * igt@sriov_basic@enable-vfs-autoprobe-on:
    - shard-tglu:         NOTRUN -> [FAIL][233] ([i915#12910]) +9 other tests fail
   [233]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-tglu-7/igt@sriov_basic@enable-vfs-autoprobe-on.html

  
#### Possible fixes ####

  * igt@gem_ctx_freq@sysfs@gt0:
    - shard-dg2:          [FAIL][234] ([i915#9561]) -> [PASS][235] +1 other test pass
   [234]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-1/igt@gem_ctx_freq@sysfs@gt0.html
   [235]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-7/igt@gem_ctx_freq@sysfs@gt0.html

  * igt@gem_exec_suspend@basic-s0:
    - shard-dg2:          [INCOMPLETE][236] ([i915#13356]) -> [PASS][237] +2 other tests pass
   [236]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-5/igt@gem_exec_suspend@basic-s0.html
   [237]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@gem_exec_suspend@basic-s0.html

  * igt@gem_exec_suspend@basic-s3:
    - shard-rkl:          [INCOMPLETE][238] ([i915#13356]) -> [PASS][239] +1 other test pass
   [238]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@gem_exec_suspend@basic-s3.html
   [239]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@gem_exec_suspend@basic-s3.html

  * igt@i915_module_load@reload-with-fault-injection:
    - shard-rkl:          [ABORT][240] ([i915#15342]) -> [PASS][241]
   [240]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@i915_module_load@reload-with-fault-injection.html
   [241]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@i915_module_load@reload-with-fault-injection.html

  * igt@kms_async_flips@async-flip-suspend-resume:
    - shard-rkl:          [INCOMPLETE][242] ([i915#12761]) -> [PASS][243] +1 other test pass
   [242]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_async_flips@async-flip-suspend-resume.html
   [243]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_async_flips@async-flip-suspend-resume.html

  * igt@kms_atomic@plane-cursor-legacy:
    - shard-dg1:          [DMESG-WARN][244] ([i915#4423]) -> [PASS][245] +3 other tests pass
   [244]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-18/igt@kms_atomic@plane-cursor-legacy.html
   [245]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-13/igt@kms_atomic@plane-cursor-legacy.html

  * igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-mtlp:         [FAIL][246] ([i915#5138]) -> [PASS][247]
   [246]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-mtlp-2/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html
   [247]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-mtlp-5/igt@kms_big_fb@4-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc:
    - shard-rkl:          [INCOMPLETE][248] ([i915#12796]) -> [PASS][249]
   [248]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc.html
   [249]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-2/igt@kms_ccs@crc-primary-suspend-y-tiled-gen12-rc-ccs-cc.html

  * igt@kms_cursor_crc@cursor-sliding-128x42:
    - shard-rkl:          [FAIL][250] ([i915#13566]) -> [PASS][251] +1 other test pass
   [250]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_cursor_crc@cursor-sliding-128x42.html
   [251]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_cursor_crc@cursor-sliding-128x42.html

  * igt@kms_flip@flip-vs-suspend:
    - shard-snb:          [INCOMPLETE][252] ([i915#12314] / [i915#12745] / [i915#4839]) -> [PASS][253]
   [252]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-snb6/igt@kms_flip@flip-vs-suspend.html
   [253]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-snb6/igt@kms_flip@flip-vs-suspend.html

  * igt@kms_flip@flip-vs-suspend@b-hdmi-a1:
    - shard-snb:          [INCOMPLETE][254] ([i915#12314] / [i915#4839]) -> [PASS][255]
   [254]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-snb6/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html
   [255]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-snb6/igt@kms_flip@flip-vs-suspend@b-hdmi-a1.html

  * igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-mmap-cpu:
    - shard-snb:          [SKIP][256] -> [PASS][257]
   [256]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-snb5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-mmap-cpu.html
   [257]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-snb5/igt@kms_frontbuffer_tracking@fbc-2p-scndscrn-cur-indfb-draw-mmap-cpu.html

  * igt@kms_hdr@invalid-metadata-sizes:
    - shard-dg2:          [SKIP][258] ([i915#3555] / [i915#8228]) -> [PASS][259] +1 other test pass
   [258]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-3/igt@kms_hdr@invalid-metadata-sizes.html
   [259]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-11/igt@kms_hdr@invalid-metadata-sizes.html

  * igt@kms_pm_rpm@modeset-non-lpsp:
    - shard-rkl:          [SKIP][260] ([i915#15073]) -> [PASS][261]
   [260]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-5/igt@kms_pm_rpm@modeset-non-lpsp.html
   [261]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-7/igt@kms_pm_rpm@modeset-non-lpsp.html

  
#### Warnings ####

  * igt@gem_ccs@suspend-resume:
    - shard-rkl:          [SKIP][262] ([i915#9323]) -> [SKIP][263] ([i915#14544] / [i915#9323])
   [262]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@gem_ccs@suspend-resume.html
   [263]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_ccs@suspend-resume.html

  * igt@gem_create@create-ext-set-pat:
    - shard-rkl:          [SKIP][264] ([i915#8562]) -> [SKIP][265] ([i915#14544] / [i915#8562])
   [264]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@gem_create@create-ext-set-pat.html
   [265]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_create@create-ext-set-pat.html

  * igt@gem_ctx_sseu@mmap-args:
    - shard-rkl:          [SKIP][266] ([i915#280]) -> [SKIP][267] ([i915#14544] / [i915#280])
   [266]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@gem_ctx_sseu@mmap-args.html
   [267]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_ctx_sseu@mmap-args.html

  * igt@gem_exec_balancer@parallel:
    - shard-rkl:          [SKIP][268] ([i915#4525]) -> [SKIP][269] ([i915#14544] / [i915#4525]) +2 other tests skip
   [268]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@gem_exec_balancer@parallel.html
   [269]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_exec_balancer@parallel.html

  * igt@gem_exec_balancer@parallel-keep-in-fence:
    - shard-rkl:          [SKIP][270] ([i915#14544] / [i915#4525]) -> [SKIP][271] ([i915#4525])
   [270]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_exec_balancer@parallel-keep-in-fence.html
   [271]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_exec_balancer@parallel-keep-in-fence.html

  * igt@gem_exec_reloc@basic-gtt-wc-active:
    - shard-rkl:          [SKIP][272] ([i915#3281]) -> [SKIP][273] ([i915#14544] / [i915#3281]) +5 other tests skip
   [272]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@gem_exec_reloc@basic-gtt-wc-active.html
   [273]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_exec_reloc@basic-gtt-wc-active.html

  * igt@gem_exec_reloc@basic-wc:
    - shard-rkl:          [SKIP][274] ([i915#14544] / [i915#3281]) -> [SKIP][275] ([i915#3281]) +5 other tests skip
   [274]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_exec_reloc@basic-wc.html
   [275]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_exec_reloc@basic-wc.html

  * igt@gem_exec_schedule@semaphore-power:
    - shard-rkl:          [SKIP][276] ([i915#7276]) -> [SKIP][277] ([i915#14544] / [i915#7276])
   [276]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@gem_exec_schedule@semaphore-power.html
   [277]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_exec_schedule@semaphore-power.html

  * igt@gem_lmem_swapping@heavy-verify-random:
    - shard-rkl:          [SKIP][278] ([i915#14544] / [i915#4613]) -> [SKIP][279] ([i915#4613]) +1 other test skip
   [278]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_lmem_swapping@heavy-verify-random.html
   [279]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_lmem_swapping@heavy-verify-random.html

  * igt@gem_lmem_swapping@parallel-random-engines:
    - shard-rkl:          [SKIP][280] ([i915#4613]) -> [SKIP][281] ([i915#14544] / [i915#4613]) +2 other tests skip
   [280]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@gem_lmem_swapping@parallel-random-engines.html
   [281]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_lmem_swapping@parallel-random-engines.html

  * igt@gem_madvise@dontneed-before-pwrite:
    - shard-rkl:          [SKIP][282] ([i915#14544] / [i915#3282]) -> [SKIP][283] ([i915#3282]) +2 other tests skip
   [282]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_madvise@dontneed-before-pwrite.html
   [283]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_madvise@dontneed-before-pwrite.html

  * igt@gem_media_vme:
    - shard-rkl:          [SKIP][284] ([i915#14544] / [i915#284]) -> [SKIP][285] ([i915#284])
   [284]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_media_vme.html
   [285]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_media_vme.html

  * igt@gem_partial_pwrite_pread@reads:
    - shard-rkl:          [SKIP][286] ([i915#3282]) -> [SKIP][287] ([i915#14544] / [i915#3282]) +6 other tests skip
   [286]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@gem_partial_pwrite_pread@reads.html
   [287]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_partial_pwrite_pread@reads.html

  * igt@gem_set_tiling_vs_blt@tiled-to-tiled:
    - shard-rkl:          [SKIP][288] ([i915#8411]) -> [SKIP][289] ([i915#14544] / [i915#8411])
   [288]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@gem_set_tiling_vs_blt@tiled-to-tiled.html
   [289]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_set_tiling_vs_blt@tiled-to-tiled.html

  * igt@gem_set_tiling_vs_blt@tiled-to-untiled:
    - shard-rkl:          [SKIP][290] ([i915#14544] / [i915#8411]) -> [SKIP][291] ([i915#8411]) +1 other test skip
   [290]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_set_tiling_vs_blt@tiled-to-untiled.html
   [291]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_set_tiling_vs_blt@tiled-to-untiled.html

  * igt@gem_userptr_blits@dmabuf-sync:
    - shard-rkl:          [SKIP][292] ([i915#3297] / [i915#3323]) -> [SKIP][293] ([i915#14544] / [i915#3297] / [i915#3323])
   [292]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@gem_userptr_blits@dmabuf-sync.html
   [293]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gem_userptr_blits@dmabuf-sync.html

  * igt@gem_userptr_blits@forbidden-operations:
    - shard-rkl:          [SKIP][294] ([i915#14544] / [i915#3282] / [i915#3297]) -> [SKIP][295] ([i915#3282] / [i915#3297])
   [294]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_userptr_blits@forbidden-operations.html
   [295]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_userptr_blits@forbidden-operations.html

  * igt@gem_userptr_blits@invalid-mmap-offset-unsync:
    - shard-rkl:          [SKIP][296] ([i915#14544] / [i915#3297]) -> [SKIP][297] ([i915#3297])
   [296]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gem_userptr_blits@invalid-mmap-offset-unsync.html
   [297]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gem_userptr_blits@invalid-mmap-offset-unsync.html

  * igt@gen9_exec_parse@batch-without-end:
    - shard-rkl:          [SKIP][298] ([i915#2527]) -> [SKIP][299] ([i915#14544] / [i915#2527]) +2 other tests skip
   [298]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@gen9_exec_parse@batch-without-end.html
   [299]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@gen9_exec_parse@batch-without-end.html

  * igt@gen9_exec_parse@shadow-peek:
    - shard-rkl:          [SKIP][300] ([i915#14544] / [i915#2527]) -> [SKIP][301] ([i915#2527]) +1 other test skip
   [300]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@gen9_exec_parse@shadow-peek.html
   [301]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@gen9_exec_parse@shadow-peek.html

  * igt@i915_module_load@resize-bar:
    - shard-rkl:          [SKIP][302] ([i915#6412]) -> [SKIP][303] ([i915#14544] / [i915#6412])
   [302]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@i915_module_load@resize-bar.html
   [303]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@i915_module_load@resize-bar.html

  * igt@i915_pm_freq_mult@media-freq@gt0:
    - shard-rkl:          [SKIP][304] ([i915#6590]) -> [SKIP][305] ([i915#14544] / [i915#6590]) +1 other test skip
   [304]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@i915_pm_freq_mult@media-freq@gt0.html
   [305]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@i915_pm_freq_mult@media-freq@gt0.html

  * igt@i915_query@test-query-geometry-subslices:
    - shard-rkl:          [SKIP][306] ([i915#5723]) -> [SKIP][307] ([i915#14544] / [i915#5723])
   [306]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@i915_query@test-query-geometry-subslices.html
   [307]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@i915_query@test-query-geometry-subslices.html

  * igt@intel_hwmon@hwmon-read:
    - shard-rkl:          [SKIP][308] ([i915#14544] / [i915#7707]) -> [SKIP][309] ([i915#7707])
   [308]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@intel_hwmon@hwmon-read.html
   [309]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@intel_hwmon@hwmon-read.html

  * igt@intel_hwmon@hwmon-write:
    - shard-rkl:          [SKIP][310] ([i915#7707]) -> [SKIP][311] ([i915#14544] / [i915#7707])
   [310]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@intel_hwmon@hwmon-write.html
   [311]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@intel_hwmon@hwmon-write.html

  * igt@kms_atomic@plane-primary-overlay-mutable-zpos:
    - shard-rkl:          [SKIP][312] ([i915#14544] / [i915#9531]) -> [SKIP][313] ([i915#9531])
   [312]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html
   [313]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_atomic@plane-primary-overlay-mutable-zpos.html

  * igt@kms_big_fb@4-tiled-16bpp-rotate-270:
    - shard-rkl:          [SKIP][314] ([i915#5286]) -> [SKIP][315] ([i915#14544] / [i915#5286]) +2 other tests skip
   [314]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_big_fb@4-tiled-16bpp-rotate-270.html
   [315]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_big_fb@4-tiled-16bpp-rotate-270.html

  * igt@kms_big_fb@4-tiled-32bpp-rotate-90:
    - shard-rkl:          [SKIP][316] ([i915#14544] / [i915#5286]) -> [SKIP][317] ([i915#5286]) +2 other tests skip
   [316]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html
   [317]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_big_fb@4-tiled-32bpp-rotate-90.html

  * igt@kms_big_fb@linear-16bpp-rotate-90:
    - shard-rkl:          [SKIP][318] ([i915#3638]) -> [SKIP][319] ([i915#14544] / [i915#3638]) +1 other test skip
   [318]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_big_fb@linear-16bpp-rotate-90.html
   [319]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_big_fb@linear-16bpp-rotate-90.html

  * igt@kms_big_fb@linear-64bpp-rotate-90:
    - shard-rkl:          [SKIP][320] ([i915#14544] / [i915#3638]) -> [SKIP][321] ([i915#3638]) +1 other test skip
   [320]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_big_fb@linear-64bpp-rotate-90.html
   [321]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_big_fb@linear-64bpp-rotate-90.html

  * igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0:
    - shard-rkl:          [SKIP][322] ([i915#14544]) -> [SKIP][323] +8 other tests skip
   [322]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html
   [323]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_big_fb@yf-tiled-max-hw-stride-64bpp-rotate-0.html

  * igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs:
    - shard-rkl:          [SKIP][324] ([i915#12313]) -> [SKIP][325] ([i915#12313] / [i915#14544])
   [324]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html
   [325]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_ccs@bad-rotation-90-4-tiled-bmg-ccs.html

  * igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-mc-ccs:
    - shard-dg1:          [SKIP][326] ([i915#6095]) -> [SKIP][327] ([i915#4423] / [i915#6095]) +1 other test skip
   [326]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-15/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-mc-ccs.html
   [327]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-19/igt@kms_ccs@crc-primary-rotation-180-4-tiled-dg2-mc-ccs.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-2:
    - shard-rkl:          [SKIP][328] ([i915#14544] / [i915#6095]) -> [SKIP][329] ([i915#6095]) +15 other tests skip
   [328]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-2.html
   [329]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-a-hdmi-a-2.html

  * igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-2:
    - shard-rkl:          [SKIP][330] ([i915#14098] / [i915#14544] / [i915#6095]) -> [SKIP][331] ([i915#14098] / [i915#6095]) +17 other tests skip
   [330]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-2.html
   [331]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_ccs@crc-primary-suspend-4-tiled-dg2-rc-ccs-cc@pipe-c-hdmi-a-2.html

  * igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs:
    - shard-rkl:          [SKIP][332] ([i915#14098] / [i915#6095]) -> [SKIP][333] ([i915#14098] / [i915#14544] / [i915#6095]) +17 other tests skip
   [332]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html
   [333]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_ccs@crc-primary-suspend-yf-tiled-ccs.html

  * igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs@pipe-b-hdmi-a-2:
    - shard-rkl:          [SKIP][334] ([i915#6095]) -> [SKIP][335] ([i915#14544] / [i915#6095]) +16 other tests skip
   [334]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs@pipe-b-hdmi-a-2.html
   [335]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_ccs@crc-sprite-planes-basic-4-tiled-dg2-rc-ccs@pipe-b-hdmi-a-2.html

  * igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs:
    - shard-rkl:          [SKIP][336] ([i915#12313] / [i915#14544]) -> [SKIP][337] ([i915#12313])
   [336]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs.html
   [337]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_ccs@random-ccs-data-4-tiled-bmg-ccs.html

  * igt@kms_cdclk@plane-scaling:
    - shard-rkl:          [SKIP][338] ([i915#3742]) -> [SKIP][339] ([i915#14544] / [i915#3742])
   [338]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_cdclk@plane-scaling.html
   [339]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_cdclk@plane-scaling.html

  * igt@kms_chamelium_edid@dp-edid-read:
    - shard-dg1:          [SKIP][340] ([i915#11151] / [i915#7828]) -> [SKIP][341] ([i915#11151] / [i915#4423] / [i915#7828])
   [340]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-15/igt@kms_chamelium_edid@dp-edid-read.html
   [341]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-19/igt@kms_chamelium_edid@dp-edid-read.html

  * igt@kms_chamelium_edid@dp-mode-timings:
    - shard-rkl:          [SKIP][342] ([i915#11151] / [i915#14544] / [i915#7828]) -> [SKIP][343] ([i915#11151] / [i915#7828]) +4 other tests skip
   [342]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_chamelium_edid@dp-mode-timings.html
   [343]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_chamelium_edid@dp-mode-timings.html

  * igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode:
    - shard-rkl:          [SKIP][344] ([i915#11151] / [i915#7828]) -> [SKIP][345] ([i915#11151] / [i915#14544] / [i915#7828]) +6 other tests skip
   [344]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html
   [345]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_chamelium_hpd@dp-hpd-enable-disable-mode.html

  * igt@kms_content_protection@content-type-change:
    - shard-rkl:          [SKIP][346] ([i915#14544] / [i915#6944] / [i915#9424]) -> [SKIP][347] ([i915#6944] / [i915#9424])
   [346]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_content_protection@content-type-change.html
   [347]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_content_protection@content-type-change.html

  * igt@kms_content_protection@dp-mst-suspend-resume:
    - shard-rkl:          [SKIP][348] ([i915#15330]) -> [SKIP][349] ([i915#14544] / [i915#15330])
   [348]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_content_protection@dp-mst-suspend-resume.html
   [349]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_content_protection@dp-mst-suspend-resume.html

  * igt@kms_content_protection@dp-mst-type-0:
    - shard-rkl:          [SKIP][350] ([i915#14544] / [i915#3116]) -> [SKIP][351] ([i915#3116])
   [350]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_content_protection@dp-mst-type-0.html
   [351]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_content_protection@dp-mst-type-0.html

  * igt@kms_content_protection@lic-type-0:
    - shard-dg2:          [FAIL][352] ([i915#7173]) -> [SKIP][353] ([i915#6944] / [i915#9424])
   [352]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-11/igt@kms_content_protection@lic-type-0.html
   [353]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_content_protection@lic-type-0.html

  * igt@kms_content_protection@suspend-resume:
    - shard-dg2:          [SKIP][354] ([i915#6944]) -> [FAIL][355] ([i915#7173])
   [354]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-6/igt@kms_content_protection@suspend-resume.html
   [355]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-11/igt@kms_content_protection@suspend-resume.html

  * igt@kms_content_protection@type1:
    - shard-dg2:          [SKIP][356] ([i915#6944] / [i915#7118] / [i915#7162] / [i915#9424]) -> [SKIP][357] ([i915#6944] / [i915#7118] / [i915#9424])
   [356]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-11/igt@kms_content_protection@type1.html
   [357]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-3/igt@kms_content_protection@type1.html
    - shard-rkl:          [SKIP][358] ([i915#6944] / [i915#7118] / [i915#9424]) -> [SKIP][359] ([i915#14544] / [i915#6944] / [i915#7118] / [i915#9424]) +1 other test skip
   [358]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_content_protection@type1.html
   [359]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_content_protection@type1.html

  * igt@kms_cursor_legacy@cursora-vs-flipb-legacy:
    - shard-rkl:          [SKIP][360] -> [SKIP][361] ([i915#14544]) +11 other tests skip
   [360]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html
   [361]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_cursor_legacy@cursora-vs-flipb-legacy.html

  * igt@kms_cursor_legacy@cursorb-vs-flipb-legacy:
    - shard-dg1:          [SKIP][362] -> [SKIP][363] ([i915#4423])
   [362]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-12/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html
   [363]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-16/igt@kms_cursor_legacy@cursorb-vs-flipb-legacy.html

  * igt@kms_display_modes@extended-mode-basic:
    - shard-rkl:          [SKIP][364] ([i915#13691] / [i915#14544]) -> [SKIP][365] ([i915#13691])
   [364]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_display_modes@extended-mode-basic.html
   [365]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_display_modes@extended-mode-basic.html

  * igt@kms_dp_link_training@uhbr-sst:
    - shard-rkl:          [SKIP][366] ([i915#13748]) -> [SKIP][367] ([i915#13748] / [i915#14544])
   [366]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_dp_link_training@uhbr-sst.html
   [367]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_dp_link_training@uhbr-sst.html

  * igt@kms_dsc@dsc-with-output-formats:
    - shard-rkl:          [SKIP][368] ([i915#3555] / [i915#3840]) -> [SKIP][369] ([i915#14544] / [i915#3555] / [i915#3840])
   [368]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_dsc@dsc-with-output-formats.html
   [369]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_dsc@dsc-with-output-formats.html

  * igt@kms_dsc@dsc-with-output-formats-with-bpc:
    - shard-rkl:          [SKIP][370] ([i915#14544] / [i915#3840] / [i915#9053]) -> [SKIP][371] ([i915#3840] / [i915#9053])
   [370]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_dsc@dsc-with-output-formats-with-bpc.html
   [371]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_dsc@dsc-with-output-formats-with-bpc.html

  * igt@kms_fbcon_fbt@psr:
    - shard-rkl:          [SKIP][372] ([i915#3955]) -> [SKIP][373] ([i915#14544] / [i915#3955])
   [372]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_fbcon_fbt@psr.html
   [373]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_fbcon_fbt@psr.html

  * igt@kms_feature_discovery@display-2x:
    - shard-rkl:          [SKIP][374] ([i915#1839]) -> [SKIP][375] ([i915#14544] / [i915#1839])
   [374]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_feature_discovery@display-2x.html
   [375]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_feature_discovery@display-2x.html

  * igt@kms_flip@2x-dpms-vs-vblank-race:
    - shard-rkl:          [SKIP][376] ([i915#14544] / [i915#9934]) -> [SKIP][377] ([i915#9934]) +4 other tests skip
   [376]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_flip@2x-dpms-vs-vblank-race.html
   [377]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_flip@2x-dpms-vs-vblank-race.html

  * igt@kms_flip@2x-plain-flip:
    - shard-rkl:          [SKIP][378] ([i915#9934]) -> [SKIP][379] ([i915#14544] / [i915#9934]) +7 other tests skip
   [378]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_flip@2x-plain-flip.html
   [379]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_flip@2x-plain-flip.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling:
    - shard-rkl:          [SKIP][380] ([i915#14544] / [i915#2672] / [i915#3555]) -> [SKIP][381] ([i915#2672] / [i915#3555])
   [380]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html
   [381]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling.html

  * igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-valid-mode:
    - shard-rkl:          [SKIP][382] ([i915#14544] / [i915#2672]) -> [SKIP][383] ([i915#2672])
   [382]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html
   [383]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_flip_scaled_crc@flip-32bpp-yftile-to-64bpp-yftile-downscaling@pipe-a-valid-mode.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling:
    - shard-rkl:          [SKIP][384] ([i915#2672] / [i915#3555]) -> [SKIP][385] ([i915#14544] / [i915#2672] / [i915#3555]) +5 other tests skip
   [384]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html
   [385]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling.html

  * igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling@pipe-a-valid-mode:
    - shard-rkl:          [SKIP][386] ([i915#2672]) -> [SKIP][387] ([i915#14544] / [i915#2672]) +5 other tests skip
   [386]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling@pipe-a-valid-mode.html
   [387]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_flip_scaled_crc@flip-64bpp-yftile-to-32bpp-yftile-upscaling@pipe-a-valid-mode.html

  * igt@kms_frontbuffer_tracking@fbc-tiling-4:
    - shard-rkl:          [SKIP][388] ([i915#14544] / [i915#5439]) -> [SKIP][389] ([i915#5439])
   [388]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_frontbuffer_tracking@fbc-tiling-4.html
   [389]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_frontbuffer_tracking@fbc-tiling-4.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-cpu:
    - shard-rkl:          [SKIP][390] ([i915#15102]) -> [SKIP][391] ([i915#14544] / [i915#15102]) +2 other tests skip
   [390]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-cpu.html
   [391]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-mmap-cpu.html

  * igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite:
    - shard-rkl:          [SKIP][392] ([i915#14544] / [i915#15102]) -> [SKIP][393] ([i915#15102]) +1 other test skip
   [392]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite.html
   [393]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-1p-offscreen-pri-indfb-draw-pwrite.html

  * igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-blt:
    - shard-rkl:          [SKIP][394] ([i915#1825]) -> [SKIP][395] ([i915#14544] / [i915#1825]) +23 other tests skip
   [394]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-blt.html
   [395]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-2p-scndscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render:
    - shard-dg2:          [SKIP][396] ([i915#10433] / [i915#15102] / [i915#3458]) -> [SKIP][397] ([i915#15102] / [i915#3458]) +2 other tests skip
   [396]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-4/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html
   [397]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-8/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html
    - shard-rkl:          [SKIP][398] ([i915#14544] / [i915#15102] / [i915#3023]) -> [SKIP][399] ([i915#15102] / [i915#3023]) +9 other tests skip
   [398]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html
   [399]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-move:
    - shard-dg2:          [SKIP][400] ([i915#15102] / [i915#3458]) -> [SKIP][401] ([i915#10433] / [i915#15102] / [i915#3458]) +1 other test skip
   [400]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg2-6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-move.html
   [401]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg2-4/igt@kms_frontbuffer_tracking@psr-1p-primscrn-cur-indfb-move.html

  * igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt:
    - shard-rkl:          [SKIP][402] ([i915#15102] / [i915#3023]) -> [SKIP][403] ([i915#14544] / [i915#15102] / [i915#3023]) +12 other tests skip
   [402]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt.html
   [403]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-1p-primscrn-spr-indfb-draw-blt.html

  * igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt:
    - shard-rkl:          [SKIP][404] ([i915#14544] / [i915#1825]) -> [SKIP][405] ([i915#1825]) +15 other tests skip
   [404]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt.html
   [405]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_frontbuffer_tracking@psr-2p-scndscrn-indfb-msflip-blt.html

  * igt@kms_joiner@basic-force-ultra-joiner:
    - shard-rkl:          [SKIP][406] ([i915#12394]) -> [SKIP][407] ([i915#12394] / [i915#14544])
   [406]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_joiner@basic-force-ultra-joiner.html
   [407]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_joiner@basic-force-ultra-joiner.html

  * igt@kms_joiner@basic-ultra-joiner:
    - shard-rkl:          [SKIP][408] ([i915#12339]) -> [SKIP][409] ([i915#12339] / [i915#14544])
   [408]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_joiner@basic-ultra-joiner.html
   [409]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_joiner@basic-ultra-joiner.html

  * igt@kms_joiner@invalid-modeset-force-big-joiner:
    - shard-rkl:          [SKIP][410] ([i915#10656] / [i915#12388] / [i915#14544]) -> [SKIP][411] ([i915#10656] / [i915#12388])
   [410]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_joiner@invalid-modeset-force-big-joiner.html
   [411]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_joiner@invalid-modeset-force-big-joiner.html

  * igt@kms_joiner@invalid-modeset-ultra-joiner:
    - shard-rkl:          [SKIP][412] ([i915#12339] / [i915#14544]) -> [SKIP][413] ([i915#12339])
   [412]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_joiner@invalid-modeset-ultra-joiner.html
   [413]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_joiner@invalid-modeset-ultra-joiner.html

  * igt@kms_pipe_stress@stress-xrgb8888-yftiled:
    - shard-rkl:          [SKIP][414] ([i915#14712]) -> [SKIP][415] ([i915#14544] / [i915#14712])
   [414]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_pipe_stress@stress-xrgb8888-yftiled.html
   [415]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_pipe_stress@stress-xrgb8888-yftiled.html

  * igt@kms_plane_multiple@2x-tiling-y:
    - shard-rkl:          [SKIP][416] ([i915#13958] / [i915#14544]) -> [SKIP][417] ([i915#13958])
   [416]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_plane_multiple@2x-tiling-y.html
   [417]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_plane_multiple@2x-tiling-y.html

  * igt@kms_plane_multiple@tiling-4:
    - shard-rkl:          [SKIP][418] ([i915#14259]) -> [SKIP][419] ([i915#14259] / [i915#14544])
   [418]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_plane_multiple@tiling-4.html
   [419]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_plane_multiple@tiling-4.html

  * igt@kms_plane_scaling@plane-scaler-unity-scaling-with-rotation@pipe-b:
    - shard-rkl:          [SKIP][420] ([i915#14544] / [i915#15329]) -> [SKIP][421] ([i915#15329]) +3 other tests skip
   [420]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_plane_scaling@plane-scaler-unity-scaling-with-rotation@pipe-b.html
   [421]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_plane_scaling@plane-scaler-unity-scaling-with-rotation@pipe-b.html

  * igt@kms_pm_backlight@fade-with-suspend:
    - shard-rkl:          [SKIP][422] ([i915#14544] / [i915#5354]) -> [SKIP][423] ([i915#5354])
   [422]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_pm_backlight@fade-with-suspend.html
   [423]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_pm_backlight@fade-with-suspend.html

  * igt@kms_pm_lpsp@kms-lpsp:
    - shard-rkl:          [SKIP][424] ([i915#9340]) -> [SKIP][425] ([i915#14544] / [i915#9340])
   [424]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_pm_lpsp@kms-lpsp.html
   [425]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_pm_lpsp@kms-lpsp.html

  * igt@kms_pm_rpm@dpms-lpsp:
    - shard-rkl:          [SKIP][426] ([i915#14544] / [i915#15073]) -> [SKIP][427] ([i915#15073])
   [426]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_pm_rpm@dpms-lpsp.html
   [427]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_pm_rpm@dpms-lpsp.html

  * igt@kms_pm_rpm@package-g7:
    - shard-rkl:          [SKIP][428] ([i915#14544] / [i915#15403]) -> [SKIP][429] ([i915#15403])
   [428]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_pm_rpm@package-g7.html
   [429]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_pm_rpm@package-g7.html

  * igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf:
    - shard-rkl:          [SKIP][430] ([i915#11520] / [i915#14544]) -> [SKIP][431] ([i915#11520]) +2 other tests skip
   [430]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf.html
   [431]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_psr2_sf@fbc-psr2-cursor-plane-move-continuous-exceed-sf.html

  * igt@kms_psr2_sf@psr2-overlay-plane-update-sf-dmg-area:
    - shard-rkl:          [SKIP][432] ([i915#11520]) -> [SKIP][433] ([i915#11520] / [i915#14544]) +6 other tests skip
   [432]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_psr2_sf@psr2-overlay-plane-update-sf-dmg-area.html
   [433]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_psr2_sf@psr2-overlay-plane-update-sf-dmg-area.html

  * igt@kms_psr2_su@page_flip-xrgb8888:
    - shard-rkl:          [SKIP][434] ([i915#9683]) -> [SKIP][435] ([i915#14544] / [i915#9683]) +1 other test skip
   [434]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_psr2_su@page_flip-xrgb8888.html
   [435]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_psr2_su@page_flip-xrgb8888.html

  * igt@kms_psr@fbc-psr-sprite-mmap-cpu:
    - shard-dg1:          [SKIP][436] ([i915#1072] / [i915#9732]) -> [SKIP][437] ([i915#1072] / [i915#4423] / [i915#9732])
   [436]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-13/igt@kms_psr@fbc-psr-sprite-mmap-cpu.html
   [437]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-15/igt@kms_psr@fbc-psr-sprite-mmap-cpu.html

  * igt@kms_psr@psr-cursor-mmap-cpu:
    - shard-rkl:          [SKIP][438] ([i915#1072] / [i915#9732]) -> [SKIP][439] ([i915#1072] / [i915#14544] / [i915#9732]) +12 other tests skip
   [438]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_psr@psr-cursor-mmap-cpu.html
   [439]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_psr@psr-cursor-mmap-cpu.html

  * igt@kms_psr@psr-suspend:
    - shard-rkl:          [SKIP][440] ([i915#1072] / [i915#14544] / [i915#9732]) -> [SKIP][441] ([i915#1072] / [i915#9732]) +10 other tests skip
   [440]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_psr@psr-suspend.html
   [441]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_psr@psr-suspend.html

  * igt@kms_psr_stress_test@invalidate-primary-flip-overlay:
    - shard-rkl:          [SKIP][442] ([i915#9685]) -> [SKIP][443] ([i915#14544] / [i915#9685])
   [442]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html
   [443]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_psr_stress_test@invalidate-primary-flip-overlay.html

  * igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270:
    - shard-rkl:          [SKIP][444] ([i915#5289]) -> [SKIP][445] ([i915#14544] / [i915#5289])
   [444]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html
   [445]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_rotation_crc@primary-yf-tiled-reflect-x-270.html

  * igt@kms_setmode@basic-clone-single-crtc:
    - shard-rkl:          [SKIP][446] ([i915#3555]) -> [SKIP][447] ([i915#14544] / [i915#3555])
   [446]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-3/igt@kms_setmode@basic-clone-single-crtc.html
   [447]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_setmode@basic-clone-single-crtc.html

  * igt@kms_setmode@invalid-clone-single-crtc-stealing:
    - shard-rkl:          [SKIP][448] ([i915#14544] / [i915#3555]) -> [SKIP][449] ([i915#3555]) +1 other test skip
   [448]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_setmode@invalid-clone-single-crtc-stealing.html
   [449]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@kms_setmode@invalid-clone-single-crtc-stealing.html

  * igt@kms_vrr@flipline:
    - shard-rkl:          [SKIP][450] ([i915#15243] / [i915#3555]) -> [SKIP][451] ([i915#14544] / [i915#15243] / [i915#3555])
   [450]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@kms_vrr@flipline.html
   [451]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@kms_vrr@flipline.html

  * igt@kms_vrr@negative-basic:
    - shard-dg1:          [SKIP][452] ([i915#3555] / [i915#4423] / [i915#9906]) -> [SKIP][453] ([i915#3555] / [i915#9906])
   [452]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-dg1-19/igt@kms_vrr@negative-basic.html
   [453]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-dg1-14/igt@kms_vrr@negative-basic.html

  * igt@kms_vrr@seamless-rr-switch-vrr:
    - shard-rkl:          [SKIP][454] ([i915#14544] / [i915#9906]) -> [SKIP][455] ([i915#9906])
   [454]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@kms_vrr@seamless-rr-switch-vrr.html
   [455]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-4/igt@kms_vrr@seamless-rr-switch-vrr.html

  * igt@perf@gen8-unprivileged-single-ctx-counters:
    - shard-rkl:          [SKIP][456] ([i915#2436]) -> [SKIP][457] ([i915#14544] / [i915#2436])
   [456]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-7/igt@perf@gen8-unprivileged-single-ctx-counters.html
   [457]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@perf@gen8-unprivileged-single-ctx-counters.html

  * igt@prime_vgem@coherency-gtt:
    - shard-rkl:          [SKIP][458] ([i915#3708]) -> [SKIP][459] ([i915#14544] / [i915#3708])
   [458]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-4/igt@prime_vgem@coherency-gtt.html
   [459]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-6/igt@prime_vgem@coherency-gtt.html

  * igt@sriov_basic@bind-unbind-vf:
    - shard-rkl:          [SKIP][460] ([i915#14544] / [i915#9917]) -> [SKIP][461] ([i915#9917])
   [460]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_17673/shard-rkl-6/igt@sriov_basic@bind-unbind-vf.html
   [461]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/shard-rkl-3/igt@sriov_basic@bind-unbind-vf.html

  
  [i915#10307]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10307
  [i915#10433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10433
  [i915#10434]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10434
  [i915#10647]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10647
  [i915#10656]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/10656
  [i915#1072]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1072
  [i915#11078]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11078
  [i915#11151]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11151
  [i915#11520]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11520
  [i915#11681]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11681
  [i915#118]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/118
  [i915#1187]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1187
  [i915#11920]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/11920
  [i915#12061]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061
  [i915#12169]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12169
  [i915#12276]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12276
  [i915#12313]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12313
  [i915#12314]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12314
  [i915#12339]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12339
  [i915#12358]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12358
  [i915#12388]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12388
  [i915#12392]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12392
  [i915#12394]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12394
  [i915#1257]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1257
  [i915#12713]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12713
  [i915#12745]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12745
  [i915#12755]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12755
  [i915#12761]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12761
  [i915#12796]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12796
  [i915#12910]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12910
  [i915#13008]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13008
  [i915#13029]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13029
  [i915#13046]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13046
  [i915#13049]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13049
  [i915#13179]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13179
  [i915#13356]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13356
  [i915#13427]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13427
  [i915#13522]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13522
  [i915#13566]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13566
  [i915#13691]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13691
  [i915#13707]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13707
  [i915#13748]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13748
  [i915#13749]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13749
  [i915#13958]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13958
  [i915#14098]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14098
  [i915#14152]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14152
  [i915#14259]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14259
  [i915#14419]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14419
  [i915#14498]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14498
  [i915#14544]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14544
  [i915#14545]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14545
  [i915#14712]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14712
  [i915#14809]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14809
  [i915#14888]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14888
  [i915#15073]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15073
  [i915#15095]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15095
  [i915#15102]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15102
  [i915#15140]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15140
  [i915#15243]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15243
  [i915#15305]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15305
  [i915#15329]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15329
  [i915#15330]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15330
  [i915#15342]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15342
  [i915#15385]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15385
  [i915#15403]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/15403
  [i915#1769]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1769
  [i915#1825]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1825
  [i915#1839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/1839
  [i915#2065]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2065
  [i915#2433]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2433
  [i915#2435]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2435
  [i915#2436]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2436
  [i915#2527]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2527
  [i915#2587]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2587
  [i915#2658]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2658
  [i915#2672]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2672
  [i915#280]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/280
  [i915#284]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/284
  [i915#2856]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/2856
  [i915#3023]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3023
  [i915#3116]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3116
  [i915#3281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3281
  [i915#3282]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3282
  [i915#3291]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3291
  [i915#3297]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3297
  [i915#3299]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3299
  [i915#3323]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3323
  [i915#3458]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3458
  [i915#3469]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3469
  [i915#3555]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3555
  [i915#3637]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3637
  [i915#3638]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3638
  [i915#3708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3708
  [i915#3742]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3742
  [i915#3804]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3804
  [i915#3840]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3840
  [i915#3955]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/3955
  [i915#4077]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4077
  [i915#4083]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4083
  [i915#4103]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4103
  [i915#4270]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4270
  [i915#4281]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4281
  [i915#4349]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4349
  [i915#4387]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4387
  [i915#4423]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4423
  [i915#4525]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4525
  [i915#4538]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4538
  [i915#4613]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4613
  [i915#4817]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4817
  [i915#4839]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4839
  [i915#4854]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/4854
  [i915#5138]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5138
  [i915#5190]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5190
  [i915#5286]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5286
  [i915#5289]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5289
  [i915#5354]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5354
  [i915#5439]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5439
  [i915#5723]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5723
  [i915#5956]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/5956
  [i915#6095]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6095
  [i915#6230]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6230
  [i915#6245]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6245
  [i915#6335]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6335
  [i915#6344]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6344
  [i915#6412]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6412
  [i915#6524]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6524
  [i915#658]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/658
  [i915#6590]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6590
  [i915#6621]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6621
  [i915#6805]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6805
  [i915#6944]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6944
  [i915#6953]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/6953
  [i915#7118]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7118
  [i915#7162]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7162
  [i915#7173]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7173
  [i915#7276]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7276
  [i915#7387]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7387
  [i915#7697]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7697
  [i915#7707]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7707
  [i915#7828]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7828
  [i915#7984]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/7984
  [i915#8228]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8228
  [i915#8399]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8399
  [i915#8411]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8411
  [i915#8428]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8428
  [i915#8516]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8516
  [i915#8562]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8562
  [i915#8708]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/8708
  [i915#9053]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9053
  [i915#9323]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9323
  [i915#9340]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9340
  [i915#9423]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9423
  [i915#9424]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9424
  [i915#9531]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9531
  [i915#9561]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9561
  [i915#9683]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9683
  [i915#9685]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9685
  [i915#9723]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9723
  [i915#9732]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9732
  [i915#9766]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9766
  [i915#9812]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9812
  [i915#9906]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9906
  [i915#9917]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9917
  [i915#9934]: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/9934


Build changes
-------------

  * Linux: CI_DRM_17673 -> Patchwork_158180v3

  CI-20190529: 20190529
  CI_DRM_17673: 90eba5e4087d6932c174f97637833862c9f9ec25 @ git://anongit.freedesktop.org/gfx-ci/linux
  IGT_8664: 28cc709ad89c0ef569569f19f4772d4cca354963 @ https://gitlab.freedesktop.org/drm/igt-gpu-tools.git
  Patchwork_158180v3: 90eba5e4087d6932c174f97637833862c9f9ec25 @ git://anongit.freedesktop.org/gfx-ci/linux
  piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit

== Logs ==

For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_158180v3/index.html

[-- Attachment #2: Type: text/html, Size: 159036 bytes --]

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH v2 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config()
  2025-12-12 18:17   ` [PATCH v2 " Imre Deak
@ 2025-12-15  6:06     ` Hogander, Jouni
  0 siblings, 0 replies; 137+ messages in thread
From: Hogander, Jouni @ 2025-12-15  6:06 UTC (permalink / raw)
  To: intel-xe@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	Deak, Imre

On Fri, 2025-12-12 at 20:17 +0200, Imre Deak wrote:
> Add intel_dp_dsc_get_slice_config() to compute the detailed slice
> configuration and determine the slices-per-line value (returned by
> intel_dp_dsc_get_slice_count()) using this function.
> 
> v2: Fix incorrectly returning false from
> intel_dp_dsc_min_slice_count()
>     due to rebase fail. (Jouni)

Reviewed-by: Jouni Högander <jouni.hogander@intel.com>

> 
> Cc: Jouni Högander <jouni.hogander@intel.com>
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 31 ++++++++++++++++++++---
> --
>  1 file changed, 25 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c
> b/drivers/gpu/drm/i915/display/intel_dp.c
> index ff776bf3b0366..1808020877d19 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1018,9 +1018,11 @@ static int intel_dp_dsc_min_slice_count(const
> struct intel_connector *connector,
>  	return min_slice_count;
>  }
>  
> -u8 intel_dp_dsc_get_slice_count(const struct intel_connector
> *connector,
> -				int mode_clock, int mode_hdisplay,
> -				int num_joined_pipes)
> +static bool
> +intel_dp_dsc_get_slice_config(const struct intel_connector
> *connector,
> +			      int mode_clock, int mode_hdisplay,
> +			      int num_joined_pipes,
> +			      struct intel_dsc_slice_config
> *config_ret)
>  {
>  	struct intel_display *display = to_intel_display(connector);
>  	int min_slice_count =
> @@ -1057,8 +1059,11 @@ u8 intel_dp_dsc_get_slice_count(const struct
> intel_connector *connector,
>  		if (mode_hdisplay % slices_per_line)
>  			continue;
>  
> -		if (min_slice_count <= slices_per_line)
> -			return slices_per_line;
> +		if (min_slice_count <= slices_per_line) {
> +			*config_ret = config;
> +
> +			return true;
> +		}
>  	}
>  
>  	/* Print slice count 1,2,4,..24 if bit#0,1,3,..23 is set in
> the mask. */
> @@ -1069,7 +1074,21 @@ u8 intel_dp_dsc_get_slice_count(const struct
> intel_connector *connector,
>  		    min_slice_count,
>  		    (int)BITS_PER_TYPE(sink_slice_count_mask),
> &sink_slice_count_mask);
>  
> -	return 0;
> +	return false;
> +}
> +
> +u8 intel_dp_dsc_get_slice_count(const struct intel_connector
> *connector,
> +				int mode_clock, int mode_hdisplay,
> +				int num_joined_pipes)
> +{
> +	struct intel_dsc_slice_config config;
> +
> +	if (!intel_dp_dsc_get_slice_config(connector,
> +					   mode_clock,
> mode_hdisplay,
> +					   num_joined_pipes,
> &config))
> +		return 0;
> +
> +	return intel_dsc_line_slice_count(&config);
>  }
>  
>  static bool source_can_output(struct intel_dp *intel_dp,



* Re: [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-11-27 17:49 ` [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp() Imre Deak
  2025-12-12 15:41   ` Govindapillai, Vinod
@ 2025-12-15  7:46   ` Luca Coelho
  2025-12-15 11:53     ` Imre Deak
  1 sibling, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-15  7:46 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out align_max_sink_dsc_input_bpp(), also used later for computing
> the maximum DSC input BPP limit.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++---------
>  1 file changed, 18 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 000fccc39a292..dcb9bc11e677b 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
>  	return intel_dp_dsc_min_src_input_bpc();
>  }
>  
> +static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
> +					int max_pipe_bpp)
> +{
> +	u8 dsc_bpc[3];

I think it's safer to use the '= {}' we had before, because that zeroes
the array, so in case of a stack leak you won't expose arbitrary parts
of memory.  In this case it's only 3 bytes, so hardly anything
important could leak, but anyway.

Also, since this is 3 bytes long, it's theoretically better to have it
at the end of the stack declarations.

> +	int num_bpc;
> +	int i;
> +
> +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> +						       dsc_bpc);
> +	for (i = 0; i < num_bpc; i++) {
> +		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
> +			return dsc_bpc[i] * 3;
> +	}
> +
> +	return 0;
> +}
> +
>  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
>  				 u8 max_req_bpc)
>  {
>  	struct intel_display *display = to_intel_display(connector);
> -	int i, num_bpc;
> -	u8 dsc_bpc[3] = {};
>  	int dsc_max_bpc;
>  
>  	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
> @@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
>  
>  	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
>  
> -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> -						       dsc_bpc);
> -	for (i = 0; i < num_bpc; i++) {
> -		if (dsc_max_bpc >= dsc_bpc[i])
> -			return dsc_bpc[i] * 3;
> -	}
> -
> -	return 0;
> +	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
>  }
>  
>  static int intel_dp_source_dsc_version_minor(struct intel_display *display)

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-11-27 17:49 ` [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16() Imre Deak
  2025-12-12 15:46   ` Govindapillai, Vinod
@ 2025-12-15  7:49   ` Luca Coelho
  2025-12-15 12:00     ` Imre Deak
  1 sibling, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-15  7:49 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Factor out align_max_vesa_compressed_bpp_x16(), also used later for
> computing the maximum DSC compressed BPP limit.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
>  1 file changed, 20 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index dcb9bc11e677b..3111758578d6c 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
>  		return 6144 * 8;
>  }
>  
> +static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
> +{
> +	int i;
> +
> +	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);

Any reason why you're doing the loop from the end to the beginning,
instead of the more natural from 0 to the end?

I think this is clearer and less prone to mistakes:

	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {...}


> +
> +		if (vesa_bpp_x16 <= max_link_bpp_x16)
> +			return vesa_bpp_x16;
> +	}
> +
> +	return 0;
> +}
> +
>  static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
>  {
>  	u32 bits_per_pixel = bpp;
> -	int i;
>  
>  	/* Error out if the max bpp is less than smallest allowed valid bpp */
>  	if (bits_per_pixel < valid_dsc_bpp[0]) {
> @@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
>  		}
>  		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
>  	} else {
> +		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> +
>  		/* Find the nearest match in the array of known BPPs from VESA */
> -		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
> -			if (bits_per_pixel < valid_dsc_bpp[i + 1])
> -				break;
> -		}
> -		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
> -			    bits_per_pixel, valid_dsc_bpp[i]);
> +		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
>  
> -		bits_per_pixel = valid_dsc_bpp[i];
> +		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> +		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
>  	}
>  
>  	return bits_per_pixel;
> @@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
>  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
>  {
>  	struct intel_display *display = to_intel_display(intel_dp);
> -	int i;
>  
>  	if (DISPLAY_VER(display) >= 13) {
>  		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
> @@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
>  	if (fxp_q4_to_frac(bpp_x16))
>  		return false;
>  
> -	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> -		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
> -			return true;
> -	}
> -
> -	return false;
> +	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
>  }
>  
>  /*

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values
  2025-11-27 17:49 ` [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values Imre Deak
  2025-12-12 15:48   ` Govindapillai, Vinod
@ 2025-12-15  7:51   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-15  7:51 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> Make sure that state computation fails if the minimum/maximum link BPP
> values got invalid as a result of limiting both of these values
> separately to the corresponding source/sink capability limits.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 3111758578d6c..545d872a30403 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2654,7 +2654,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  	limits->link.max_bpp_x16 = max_link_bpp_x16;
>  
>  	drm_dbg_kms(display->drm,
> -		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d max link_bpp " FXP_Q4_FMT "\n",
> +		    "[ENCODER:%d:%s][CRTC:%d:%s] DP link limits: pixel clock %d kHz DSC %s max lanes %d max rate %d max pipe_bpp %d min link_bpp " FXP_Q4_FMT " max link_bpp " FXP_Q4_FMT "\n",
>  		    encoder->base.base.id, encoder->base.name,
>  		    crtc->base.base.id, crtc->base.name,
>  		    adjusted_mode->crtc_clock,
> @@ -2662,8 +2662,13 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		    limits->max_lane_count,
>  		    limits->max_rate,
>  		    limits->pipe.max_bpp,
> +		    FXP_Q4_ARGS(limits->link.min_bpp_x16),
>  		    FXP_Q4_ARGS(limits->link.max_bpp_x16));
>  
> +	if (limits->link.min_bpp_x16 <= 0 ||
> +	    limits->link.min_bpp_x16 > limits->link.max_bpp_x16)
> +		return false;
> +
>  	return true;
>  }
>  

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value
  2025-11-27 17:49 ` [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value Imre Deak
  2025-12-12 15:51   ` Govindapillai, Vinod
@ 2025-12-15  7:51   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-15  7:51 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> There is no reason to accept a minimum/maximum link BPP value above the
> maximum throughput BPP value, fail the state computation in this case.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 2 --
>  1 file changed, 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 545d872a30403..f97ee8265836a 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2638,8 +2638,6 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));
>  
>  		throughput_max_bpp_x16 = dsc_throughput_quirk_max_bpp_x16(connector, crtc_state);
> -		throughput_max_bpp_x16 = clamp(throughput_max_bpp_x16,
> -					       limits->link.min_bpp_x16, max_link_bpp_x16);
>  		if (throughput_max_bpp_x16 < max_link_bpp_x16) {
>  			max_link_bpp_x16 = throughput_max_bpp_x16;
>  

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed BPP value
  2025-11-27 17:49 ` [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed " Imre Deak
  2025-12-12 15:52   ` Govindapillai, Vinod
@ 2025-12-15  7:52   ` Luca Coelho
  1 sibling, 0 replies; 137+ messages in thread
From: Luca Coelho @ 2025-12-15  7:52 UTC (permalink / raw)
  To: Imre Deak, intel-gfx, intel-xe

On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> There is no reason to accept an invalid maximum sink compressed BPP
> value (i.e. 0), fail the state computation in this case.
> 
> Signed-off-by: Imre Deak <imre.deak@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_dp.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index f97ee8265836a..db7e49c17ca8d 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2631,8 +2631,7 @@ intel_dp_compute_config_link_bpp_limits(struct intel_dp *intel_dp,
>  		dsc_sink_max_bpp = intel_dp_dsc_sink_max_compressed_bpp(connector,
>  									crtc_state,
>  									limits->pipe.max_bpp / 3);
> -		dsc_max_bpp = dsc_sink_max_bpp ?
> -			      min(dsc_sink_max_bpp, dsc_src_max_bpp) : dsc_src_max_bpp;
> +		dsc_max_bpp = min(dsc_sink_max_bpp, dsc_src_max_bpp);
>  		dsc_max_bpp = min(dsc_max_bpp, joiner_max_bpp);
>  
>  		max_link_bpp_x16 = min(max_link_bpp_x16, fxp_q4_from_int(dsc_max_bpp));

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-12-15  7:46   ` Luca Coelho
@ 2025-12-15 11:53     ` Imre Deak
  2025-12-15 12:02       ` Luca Coelho
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-15 11:53 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe

On Mon, Dec 15, 2025 at 09:46:24AM +0200, Luca Coelho wrote:
> On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > Factor out align_max_sink_dsc_input_bpp(), also used later for computing
> > the maximum DSC input BPP limit.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++---------
> >  1 file changed, 18 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index 000fccc39a292..dcb9bc11e677b 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
> >  	return intel_dp_dsc_min_src_input_bpc();
> >  }
> >  
> > +static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
> > +					int max_pipe_bpp)
> > +{
> > +	u8 dsc_bpc[3];
> 
> I think it's safer to use the '= {}' we had before, because that zeroes
> the array, so in case of any stack leaks, you won't leak aleatory parts
> of the memory.  In this case it's only 3 bytes, so hardly anything
> important could leak, but anyway.

As for any other variable I don't see any reason for initializing it, if
it will be initialized before its first use. It will be initialized
before its first use by drm_dp_dsc_sink_supported_input_bpcs().

> Also, since this is 3 bytes long, it's theoretically better to have it
> at the end of the stack declarations.

The compiler is free to reorder the allocation order on the stack and
is expected to do that for optimal alignment.

> > +	int num_bpc;
> > +	int i;
> > +
> > +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > +						       dsc_bpc);
> > +	for (i = 0; i < num_bpc; i++) {
> > +		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
> > +			return dsc_bpc[i] * 3;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> >  				 u8 max_req_bpc)
> >  {
> >  	struct intel_display *display = to_intel_display(connector);
> > -	int i, num_bpc;
> > -	u8 dsc_bpc[3] = {};
> >  	int dsc_max_bpc;
> >  
> >  	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
> > @@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> >  
> >  	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
> >  
> > -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > -						       dsc_bpc);
> > -	for (i = 0; i < num_bpc; i++) {
> > -		if (dsc_max_bpc >= dsc_bpc[i])
> > -			return dsc_bpc[i] * 3;
> > -	}
> > -
> > -	return 0;
> > +	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
> >  }
> >  
> >  static int intel_dp_source_dsc_version_minor(struct intel_display *display)

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-12-15  7:49   ` Luca Coelho
@ 2025-12-15 12:00     ` Imre Deak
  2025-12-15 12:08       ` Luca Coelho
  0 siblings, 1 reply; 137+ messages in thread
From: Imre Deak @ 2025-12-15 12:00 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe

On Mon, Dec 15, 2025 at 09:49:45AM +0200, Luca Coelho wrote:
> On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > Factor out align_max_vesa_compressed_bpp_x16(), also used later for
> > computing the maximum DSC compressed BPP limit.
> > 
> > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > ---
> >  drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
> >  1 file changed, 20 insertions(+), 15 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > index dcb9bc11e677b..3111758578d6c 100644
> > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > @@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
> >  		return 6144 * 8;
> >  }
> >  
> > +static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
> > +{
> > +	int i;
> > +
> > +	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> > +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
> 
> Any reason why you're doing the loop from the end to the beginning,
> instead of the more natural from 0 to the end?

Yes. The values in valid_dsc_bpp[] are stored in increasing order, so to
find the maximum value <= the passed-in limit, the natural iteration
order is from the end of the array.

> I think this is clearer and less prone to mistakes:
> 
> 	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {...}
> 
> 
> > +
> > +		if (vesa_bpp_x16 <= max_link_bpp_x16)
> > +			return vesa_bpp_x16;
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >  static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
> >  {
> >  	u32 bits_per_pixel = bpp;
> > -	int i;
> >  
> >  	/* Error out if the max bpp is less than smallest allowed valid bpp */
> >  	if (bits_per_pixel < valid_dsc_bpp[0]) {
> > @@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
> >  		}
> >  		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
> >  	} else {
> > +		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> > +
> >  		/* Find the nearest match in the array of known BPPs from VESA */
> > -		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
> > -			if (bits_per_pixel < valid_dsc_bpp[i + 1])
> > -				break;
> > -		}
> > -		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
> > -			    bits_per_pixel, valid_dsc_bpp[i]);
> > +		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
> >  
> > -		bits_per_pixel = valid_dsc_bpp[i];
> > +		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> > +		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
> >  	}
> >  
> >  	return bits_per_pixel;
> > @@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
> >  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> >  {
> >  	struct intel_display *display = to_intel_display(intel_dp);
> > -	int i;
> >  
> >  	if (DISPLAY_VER(display) >= 13) {
> >  		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
> > @@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> >  	if (fxp_q4_to_frac(bpp_x16))
> >  		return false;
> >  
> > -	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> > -		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
> > -			return true;
> > -	}
> > -
> > -	return false;
> > +	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
> >  }
> >  
> >  /*

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-12-15 11:53     ` Imre Deak
@ 2025-12-15 12:02       ` Luca Coelho
  2025-12-15 12:33         ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-15 12:02 UTC (permalink / raw)
  To: imre.deak; +Cc: intel-gfx, intel-xe

On Mon, 2025-12-15 at 13:53 +0200, Imre Deak wrote:
> On Mon, Dec 15, 2025 at 09:46:24AM +0200, Luca Coelho wrote:
> > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > Factor out align_max_sink_dsc_input_bpp(), also used later for computing
> > > the maximum DSC input BPP limit.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++---------
> > >  1 file changed, 18 insertions(+), 10 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index 000fccc39a292..dcb9bc11e677b 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
> > >  	return intel_dp_dsc_min_src_input_bpc();
> > >  }
> > >  
> > > +static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
> > > +					int max_pipe_bpp)
> > > +{
> > > +	u8 dsc_bpc[3];
> > 
> > I think it's safer to use the '= {}' we had before, because that zeroes
> > the array, so in case of any stack leaks, you won't leak aleatory parts
> > of the memory.  In this case it's only 3 bytes, so hardly anything
> > important could leak, but anyway.
> 
> As for any other variable I don't see any reason for initializing it, if
> it will be initialized before its first use. It will be initialized
> before its first use by drm_dp_dsc_sink_supported_input_bpcs().

Fair enough.  Security here is probably not so important, and as I
said, it's only 3 bytes, but in wifi we once had an effort to
pre-initialize all arrays like this for security reasons.  Your call.


> > Also, since this is 3 bytes long, it's theoretically better to have it
> > at the end of the stack declarations.
> 
> The compiler is free to reorder the allocation order on the stack and
> is expected to do that for optimal alignment.

Of course the compiler will do this sort of thing, but it's just
better practice IMHO to keep things organized in some way.  If you had said
that it was in alphabetical order (it isn't), then it would probably
satisfy my OCD. lol

In any case, these were just nitpicks, so it's up to you.

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.



> > > +	int num_bpc;
> > > +	int i;
> > > +
> > > +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > > +						       dsc_bpc);
> > > +	for (i = 0; i < num_bpc; i++) {
> > > +		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
> > > +			return dsc_bpc[i] * 3;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> > >  				 u8 max_req_bpc)
> > >  {
> > >  	struct intel_display *display = to_intel_display(connector);
> > > -	int i, num_bpc;
> > > -	u8 dsc_bpc[3] = {};
> > >  	int dsc_max_bpc;
> > >  
> > >  	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
> > > @@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> > >  
> > >  	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
> > >  
> > > -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > > -						       dsc_bpc);
> > > -	for (i = 0; i < num_bpc; i++) {
> > > -		if (dsc_max_bpc >= dsc_bpc[i])
> > > -			return dsc_bpc[i] * 3;
> > > -	}
> > > -
> > > -	return 0;
> > > +	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
> > >  }
> > >  
> > >  static int intel_dp_source_dsc_version_minor(struct intel_display *display)

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-12-15 12:00     ` Imre Deak
@ 2025-12-15 12:08       ` Luca Coelho
  2025-12-15 12:24         ` Imre Deak
  0 siblings, 1 reply; 137+ messages in thread
From: Luca Coelho @ 2025-12-15 12:08 UTC (permalink / raw)
  To: imre.deak; +Cc: intel-gfx, intel-xe

On Mon, 2025-12-15 at 14:00 +0200, Imre Deak wrote:
> On Mon, Dec 15, 2025 at 09:49:45AM +0200, Luca Coelho wrote:
> > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > Factor out align_max_vesa_compressed_bpp_x16(), also used later for
> > > computing the maximum DSC compressed BPP limit.
> > > 
> > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > ---
> > >  drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
> > >  1 file changed, 20 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > index dcb9bc11e677b..3111758578d6c 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > @@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
> > >  		return 6144 * 8;
> > >  }
> > >  
> > > +static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
> > > +{
> > > +	int i;
> > > +
> > > +	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> > > +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
> > 
> > Any reason why you're doing the loop from the end to the beginning,
> > instead of the more natural from 0 to the end?
> 
> Yes. The values in valid_dsc_bpp[] are stored in increasing order, so to
> find the maximum value <= the passed-in limit, the natural iteration
> order is from the end of the array.

I don't really see how this affects anything functionally and by
"natural" I meant for the person reading the code.  I had to think a
bit deeper when reviewing this loop because it's not the "for (i = 0; i
< ARRAY_SIZE(...); i++)" format I'm mostly used to.

Anyway, another nitpick with no functional issues, so:

Reviewed-by: Luca Coelho <luciano.coelho@intel.com>

--
Cheers,
Luca.



> > I think this is clearer and less prone to mistakes:
> > 
> > 	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {...}
> > 
> > 
> > > +
> > > +		if (vesa_bpp_x16 <= max_link_bpp_x16)
> > > +			return vesa_bpp_x16;
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +
> > >  static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
> > >  {
> > >  	u32 bits_per_pixel = bpp;
> > > -	int i;
> > >  
> > >  	/* Error out if the max bpp is less than smallest allowed valid bpp */
> > >  	if (bits_per_pixel < valid_dsc_bpp[0]) {
> > > @@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
> > >  		}
> > >  		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
> > >  	} else {
> > > +		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> > > +
> > >  		/* Find the nearest match in the array of known BPPs from VESA */
> > > -		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
> > > -			if (bits_per_pixel < valid_dsc_bpp[i + 1])
> > > -				break;
> > > -		}
> > > -		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
> > > -			    bits_per_pixel, valid_dsc_bpp[i]);
> > > +		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
> > >  
> > > -		bits_per_pixel = valid_dsc_bpp[i];
> > > +		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> > > +		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
> > >  	}
> > >  
> > >  	return bits_per_pixel;
> > > @@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
> > >  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> > >  {
> > >  	struct intel_display *display = to_intel_display(intel_dp);
> > > -	int i;
> > >  
> > >  	if (DISPLAY_VER(display) >= 13) {
> > >  		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
> > > @@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> > >  	if (fxp_q4_to_frac(bpp_x16))
> > >  		return false;
> > >  
> > > -	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> > > -		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
> > > -			return true;
> > > -	}
> > > -
> > > -	return false;
> > > +	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
> > >  }
> > >  
> > >  /*

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16()
  2025-12-15 12:08       ` Luca Coelho
@ 2025-12-15 12:24         ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-15 12:24 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe

On Mon, Dec 15, 2025 at 02:08:04PM +0200, Luca Coelho wrote:
> On Mon, 2025-12-15 at 14:00 +0200, Imre Deak wrote:
> > On Mon, Dec 15, 2025 at 09:49:45AM +0200, Luca Coelho wrote:
> > > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > > Factor out align_max_vesa_compressed_bpp_x16(), also used later for
> > > > computing the maximum DSC compressed BPP limit.
> > > > 
> > > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/display/intel_dp.c | 35 ++++++++++++++-----------
> > > >  1 file changed, 20 insertions(+), 15 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > index dcb9bc11e677b..3111758578d6c 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > @@ -867,10 +867,23 @@ small_joiner_ram_size_bits(struct intel_display *display)
> > > >  		return 6144 * 8;
> > > >  }
> > > >  
> > > > +static int align_max_vesa_compressed_bpp_x16(int max_link_bpp_x16)
> > > > +{
> > > > +	int i;
> > > > +
> > > > +	for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> > > > +		int vesa_bpp_x16 = fxp_q4_from_int(valid_dsc_bpp[i]);
> > > 
> > > Any reason why you're doing the loop from the end to the beginning,
> > > instead of the more natural from 0 to the end?
> > 
> > Yes. The values in valid_dsc_bpp[] are stored in increasing order, so to
> > find the maximum value <= the passed-in limit, the natural iteration
> > order is from the end of the array.
> 
> I don't really see how this affects anything functionally and by
> "natural" I meant for the person reading the code.  I had to think a
> bit deeper when reviewing this loop because it's not the "for (i = 0; i
> < ARRAY_SIZE(...); i++)" format I'm mostly used to.

Yes, I also meant more natural from the reviewer POV. The forward
iteration wouldn't be obvious for three reasons: it iterates only
through ARRAY_SIZE()-1 elements, not ARRAY_SIZE elements; in iteration "i"
it must check the array element at index i+1, not at index i; and it also
depends on an extra check outside of the loop to return 0 if even the first
element of the array is above the passed-in limit.

> Anyway, another nitpick with not functional issues, so:
> 
> Reviewed-by: Luca Coelho <luciano.coelho@intel.com>
> 
> --
> Cheers,
> Luca.
> 
> 
> 
> > > I think this is clearer and less prone to mistakes:
> > > 
> > > 	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {...}
> > > 
> > > 
> > > > +
> > > > +		if (vesa_bpp_x16 <= max_link_bpp_x16)
> > > > +			return vesa_bpp_x16;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > >  static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp, u32 pipe_bpp)
> > > >  {
> > > >  	u32 bits_per_pixel = bpp;
> > > > -	int i;
> > > >  
> > > >  	/* Error out if the max bpp is less than smallest allowed valid bpp */
> > > >  	if (bits_per_pixel < valid_dsc_bpp[0]) {
> > > > @@ -899,15 +912,13 @@ static u32 intel_dp_dsc_nearest_valid_bpp(struct intel_display *display, u32 bpp
> > > >  		}
> > > >  		bits_per_pixel = min_t(u32, bits_per_pixel, 27);
> > > >  	} else {
> > > > +		int link_bpp_x16 = fxp_q4_from_int(bits_per_pixel);
> > > > +
> > > >  		/* Find the nearest match in the array of known BPPs from VESA */
> > > > -		for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp) - 1; i++) {
> > > > -			if (bits_per_pixel < valid_dsc_bpp[i + 1])
> > > > -				break;
> > > > -		}
> > > > -		drm_dbg_kms(display->drm, "Set dsc bpp from %d to VESA %d\n",
> > > > -			    bits_per_pixel, valid_dsc_bpp[i]);
> > > > +		link_bpp_x16 = align_max_vesa_compressed_bpp_x16(link_bpp_x16);
> > > >  
> > > > -		bits_per_pixel = valid_dsc_bpp[i];
> > > > +		drm_WARN_ON(display->drm, fxp_q4_to_frac(link_bpp_x16));
> > > > +		bits_per_pixel = fxp_q4_to_int(link_bpp_x16);
> > > >  	}
> > > >  
> > > >  	return bits_per_pixel;
> > > > @@ -2219,7 +2230,6 @@ int intel_dp_dsc_bpp_step_x16(const struct intel_connector *connector)
> > > >  bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> > > >  {
> > > >  	struct intel_display *display = to_intel_display(intel_dp);
> > > > -	int i;
> > > >  
> > > >  	if (DISPLAY_VER(display) >= 13) {
> > > >  		if (intel_dp->force_dsc_fractional_bpp_en && !fxp_q4_to_frac(bpp_x16))
> > > > @@ -2231,12 +2241,7 @@ bool intel_dp_dsc_valid_compressed_bpp(struct intel_dp *intel_dp, int bpp_x16)
> > > >  	if (fxp_q4_to_frac(bpp_x16))
> > > >  		return false;
> > > >  
> > > > -	for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
> > > > -		if (fxp_q4_to_int(bpp_x16) == valid_dsc_bpp[i])
> > > > -			return true;
> > > > -	}
> > > > -
> > > > -	return false;
> > > > +	return align_max_vesa_compressed_bpp_x16(bpp_x16) == bpp_x16;
> > > >  }
> > > >  
> > > >  /*

^ permalink raw reply	[flat|nested] 137+ messages in thread

* Re: [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp()
  2025-12-15 12:02       ` Luca Coelho
@ 2025-12-15 12:33         ` Imre Deak
  0 siblings, 0 replies; 137+ messages in thread
From: Imre Deak @ 2025-12-15 12:33 UTC (permalink / raw)
  To: Luca Coelho; +Cc: intel-gfx, intel-xe

On Mon, Dec 15, 2025 at 02:02:07PM +0200, Luca Coelho wrote:
> On Mon, 2025-12-15 at 13:53 +0200, Imre Deak wrote:
> > On Mon, Dec 15, 2025 at 09:46:24AM +0200, Luca Coelho wrote:
> > > On Thu, 2025-11-27 at 19:49 +0200, Imre Deak wrote:
> > > > Factor out align_max_sink_dsc_input_bpp(), which will also be used later
> > > > for computing the maximum DSC input BPP limit.
> > > > 
> > > > Signed-off-by: Imre Deak <imre.deak@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++++---------
> > > >  1 file changed, 18 insertions(+), 10 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > index 000fccc39a292..dcb9bc11e677b 100644
> > > > --- a/drivers/gpu/drm/i915/display/intel_dp.c
> > > > +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> > > > @@ -1893,12 +1893,27 @@ int intel_dp_dsc_max_src_input_bpc(struct intel_display *display)
> > > >  	return intel_dp_dsc_min_src_input_bpc();
> > > >  }
> > > >  
> > > > +static int align_max_sink_dsc_input_bpp(const struct intel_connector *connector,
> > > > +					int max_pipe_bpp)
> > > > +{
> > > > +	u8 dsc_bpc[3];
> > > 
> > > I think it's safer to use the '= {}' we had before, because that zeroes
> > > the array, so in case of any stack leaks, you won't leak random parts
> > > of memory.  In this case it's only 3 bytes, so hardly anything
> > > important could leak, but anyway.
> > 
> > As with any other variable, I don't see any reason to initialize it if
> > it will be assigned before its first use. Here it is initialized before
> > its first use by drm_dp_dsc_sink_supported_input_bpcs().
> 
> Fair enough.  Security here is probably not so important, and as I
> said, it's only 3 bytes, but in wifi we once had an effort of
> pre-initializing all arrays like this for security reasons.  Your call.

I don't see how it is more secure. I think any valid reason to zero out
variables on the stack for security reasons would need to be a guideline
explained and mandated ubiquitously across the whole kernel, not an
opt-in practice, and I'm not aware of such a guideline.

> > > Also, since this is 3 bytes long, it's theoretically better to have it
> > > at the end of the stack declarations.
> > 
> > The compiler is free to reorder the variables' allocation on the stack
> > and is expected to do that for optimal alignment.
> 
> Of course the compiler will do this sort of thing, but it's just
> better practice IMHO to keep things organized in some way.  If you had
> said that it was in alphabetical order (it isn't), then it would
> probably satisfy my OCD. lol

The ordering rule I follow is the readability of the declarations, which
is better when they are in decreasing line length order.

> In any case, these were just nitpicks, so it's up to you.
> 
> Reviewed-by: Luca Coelho <luciano.coelho@intel.com>
> 
> --
> Cheers,
> Luca.
> 
> 
> 
> > > > +	int num_bpc;
> > > > +	int i;
> > > > +
> > > > +	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > > > +						       dsc_bpc);
> > > > +	for (i = 0; i < num_bpc; i++) {
> > > > +		if (dsc_bpc[i] * 3 <= max_pipe_bpp)
> > > > +			return dsc_bpc[i] * 3;
> > > > +	}
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +
> > > >  int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> > > >  				 u8 max_req_bpc)
> > > >  {
> > > >  	struct intel_display *display = to_intel_display(connector);
> > > > -	int i, num_bpc;
> > > > -	u8 dsc_bpc[3] = {};
> > > >  	int dsc_max_bpc;
> > > >  
> > > >  	dsc_max_bpc = intel_dp_dsc_max_src_input_bpc(display);
> > > > @@ -1908,14 +1923,7 @@ int intel_dp_dsc_compute_max_bpp(const struct intel_connector *connector,
> > > >  
> > > >  	dsc_max_bpc = min(dsc_max_bpc, max_req_bpc);
> > > >  
> > > > -	num_bpc = drm_dp_dsc_sink_supported_input_bpcs(connector->dp.dsc_dpcd,
> > > > -						       dsc_bpc);
> > > > -	for (i = 0; i < num_bpc; i++) {
> > > > -		if (dsc_max_bpc >= dsc_bpc[i])
> > > > -			return dsc_bpc[i] * 3;
> > > > -	}
> > > > -
> > > > -	return 0;
> > > > +	return align_max_sink_dsc_input_bpp(connector, dsc_max_bpc * 3);
> > > >  }
> > > >  
> > > >  static int intel_dp_source_dsc_version_minor(struct intel_display *display)


end of thread (newest message: 2025-12-15 12:33 UTC)

Thread overview: 137+ messages
2025-11-27 17:49 [PATCH 00/50] drm/i915/dp: Clean up link BW/DSC slice config computation Imre Deak
2025-11-27 17:49 ` [PATCH 01/50] drm/dp: Parse all DSC slice count caps for eDP 1.5 Imre Deak
2025-12-08 11:24   ` Luca Coelho
2025-12-08 12:36     ` Imre Deak
2025-11-27 17:49 ` [PATCH 02/50] drm/dp: Add drm_dp_dsc_sink_slice_count_mask() Imre Deak
2025-12-09  8:48   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 03/50] drm/i915/dp: Fix DSC sink's slice count capability check Imre Deak
2025-12-09  8:51   ` Luca Coelho
2025-12-09  9:53     ` Imre Deak
2025-12-09 11:14       ` Luca Coelho
2025-11-27 17:49 ` [PATCH 04/50] drm/i915/dp: Return a fixed point BPP value from intel_dp_output_bpp() Imre Deak
2025-12-09  9:10   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 05/50] drm/i915/dp: Use a mode's crtc_clock vs. clock during state computation Imre Deak
2025-12-09 12:51   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 06/50] drm/i915/dp: Factor out intel_dp_link_bw_overhead() Imre Deak
2025-12-09 12:52   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 07/50] drm/i915/dp: Fix BW check in is_bw_sufficient_for_dsc_config() Imre Deak
2025-12-09 12:53   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 08/50] drm/i915/dp: Use the effective data rate for DP BW calculation Imre Deak
2025-12-10 12:48   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 09/50] drm/i915/dp: Use the effective data rate for DP compressed " Imre Deak
2025-12-10 12:50   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 10/50] drm/i915/dp: Account with MST, SSC BW overhead for uncompressed DP-MST stream BW Imre Deak
2025-12-10 13:08   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 11/50] drm/i915/dp: Account with DSC BW overhead for compressed DP-SST " Imre Deak
2025-12-10 13:39   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 12/50] drm/i915/dp: Account with pipe joiner max compressed BPP limit for DP-MST and eDP Imre Deak
2025-12-10 14:29   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 13/50] drm/i915/dp: Drop unused timeslots param from dsc_compute_link_config() Imre Deak
2025-12-10 14:31   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 14/50] drm/i915/dp: Factor out align_max_sink_dsc_input_bpp() Imre Deak
2025-12-12 15:41   ` Govindapillai, Vinod
2025-12-15  7:46   ` Luca Coelho
2025-12-15 11:53     ` Imre Deak
2025-12-15 12:02       ` Luca Coelho
2025-12-15 12:33         ` Imre Deak
2025-11-27 17:49 ` [PATCH 15/50] drm/i915/dp: Factor out align_max_vesa_compressed_bpp_x16() Imre Deak
2025-12-12 15:46   ` Govindapillai, Vinod
2025-12-15  7:49   ` Luca Coelho
2025-12-15 12:00     ` Imre Deak
2025-12-15 12:08       ` Luca Coelho
2025-12-15 12:24         ` Imre Deak
2025-11-27 17:49 ` [PATCH 16/50] drm/i915/dp: Fail state computation for invalid min/max link BPP values Imre Deak
2025-12-12 15:48   ` Govindapillai, Vinod
2025-12-15  7:51   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 17/50] drm/i915/dp: Fail state computation for invalid max throughput BPP value Imre Deak
2025-12-12 15:51   ` Govindapillai, Vinod
2025-12-15  7:51   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 18/50] drm/i915/dp: Fail state computation for invalid max sink compressed " Imre Deak
2025-12-12 15:52   ` Govindapillai, Vinod
2025-12-15  7:52   ` Luca Coelho
2025-11-27 17:49 ` [PATCH 19/50] drm/i915/dp: Fail state computation for invalid DSC source input BPP values Imre Deak
2025-12-11  8:29   ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 20/50] drm/i915/dp: Align min/max DSC input BPPs to sink caps Imre Deak
2025-12-11  8:51   ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 21/50] drm/i915/dp: Align min/max compressed BPPs when calculating BPP limits Imre Deak
2025-12-12  9:17   ` Govindapillai, Vinod
2025-12-12 11:09     ` Imre Deak
2025-11-27 17:49 ` [PATCH 22/50] drm/i915/dp: Drop intel_dp parameter from intel_dp_compute_config_link_bpp_limits() Imre Deak
2025-12-12  9:23   ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 23/50] drm/i915/dp: Pass intel_output_format to intel_dp_dsc_sink_{min_max}_compressed_bpp() Imre Deak
2025-12-12  9:27   ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 24/50] drm/i915/dp: Pass mode clock to dsc_throughput_quirk_max_bpp_x16() Imre Deak
2025-12-12  9:31   ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 25/50] drm/i915/dp: Factor out compute_min_compressed_bpp_x16() Imre Deak
2025-12-12  9:39   ` Govindapillai, Vinod
2025-12-12 11:01     ` Imre Deak
2025-12-12 11:41       ` Govindapillai, Vinod
2025-11-27 17:49 ` [PATCH 26/50] drm/i915/dp: Factor out compute_max_compressed_bpp_x16() Imre Deak
2025-12-12  9:50   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 27/50] drm/i915/dp: Add intel_dp_mode_valid_with_dsc() Imre Deak
2025-12-12 11:43   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 28/50] drm/i915/dp: Unify detect and compute time DSC mode BW validation Imre Deak
2025-12-12 14:29   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 29/50] drm/i915/dp: Use helpers to align min/max compressed BPPs Imre Deak
2025-12-12 14:34   ` Govindapillai, Vinod
2025-12-12 14:39   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 30/50] drm/i915/dp: Simplify computing DSC BPPs for eDP Imre Deak
2025-12-12 14:45   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 31/50] drm/i915/dp: Simplify computing DSC BPPs for DP-SST Imre Deak
2025-12-12 14:59   ` Govindapillai, Vinod
2025-12-12 18:41     ` Imre Deak
2025-11-27 17:50 ` [PATCH 32/50] drm/i915/dp: Simplify computing forced DSC BPP " Imre Deak
2025-12-12 15:21   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 33/50] drm/i915/dp: Unify computing compressed BPP for DP-SST and eDP Imre Deak
2025-12-12 15:38   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 34/50] drm/i915/dp: Simplify eDP vs. DP compressed BPP computation Imre Deak
2025-12-12 15:39   ` Govindapillai, Vinod
2025-11-27 17:50 ` [PATCH 35/50] drm/i915/dp: Simplify computing the DSC compressed BPP for DP-MST Imre Deak
2025-12-08 13:08   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 36/50] drm/i915/dsc: Track the detailed DSC slice configuration Imre Deak
2025-12-09  8:24   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 37/50] drm/i915/dsc: Track the DSC stream count in the DSC slice config state Imre Deak
2025-12-09  8:28   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 38/50] drm/i915/dsi: Move initialization of DSI DSC streams-per-pipe to fill_dsc() Imre Deak
2025-12-09  8:47   ` Hogander, Jouni
2025-12-09 10:38     ` Imre Deak
2025-12-09 11:37       ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 39/50] drm/i915/dsi: Track the detailed DSC slice configuration Imre Deak
2025-12-09 12:43   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 40/50] drm/i915/dp: " Imre Deak
2025-12-09 14:06   ` Hogander, Jouni
2025-12-09 14:30     ` Imre Deak
2025-12-09 17:50       ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 41/50] drm/i915/dsc: Switch to using intel_dsc_line_slice_count() Imre Deak
2025-12-09 17:14   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 42/50] drm/i915/dp: Factor out intel_dp_dsc_min_slice_count() Imre Deak
2025-12-09 17:26   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 43/50] drm/i915/dp: Use int for DSC slice count variables Imre Deak
2025-12-09 17:30   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 44/50] drm/i915/dp: Rename test_slice_count to slices_per_line Imre Deak
2025-12-09 17:34   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 45/50] drm/i915/dp: Simplify the DSC slice config loop's slices-per-pipe iteration Imre Deak
2025-12-10 12:38   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 46/50] drm/i915/dsc: Add intel_dsc_get_slice_config() Imre Deak
2025-12-10 14:06   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 47/50] drm/i915/dsi: Use intel_dsc_get_slice_config() Imre Deak
2025-12-10 14:44   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 48/50] drm/i915/dp: Unify DP and eDP slice count computation Imre Deak
2025-12-11  6:48   ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 49/50] drm/i915/dp: Add intel_dp_dsc_get_slice_config() Imre Deak
2025-12-11  6:55   ` Hogander, Jouni
2025-12-11  9:52     ` Imre Deak
2025-12-12 18:17   ` [PATCH v2 " Imre Deak
2025-12-15  6:06     ` Hogander, Jouni
2025-11-27 17:50 ` [PATCH 50/50] drm/i915/dp: Use intel_dp_dsc_get_slice_config() Imre Deak
2025-12-11  6:59   ` Hogander, Jouni
2025-12-11 10:23     ` Imre Deak
2025-12-12 18:03       ` Imre Deak
2025-12-12 18:17   ` [PATCH v2 " Imre Deak
2025-11-28 16:20 ` [CI 09/50] drm/i915/dp: Use the effective data rate for DP compressed BW calculation Imre Deak
2025-12-12 13:23   ` Govindapillai, Vinod
2025-11-28 18:48 ` ✗ i915.CI.BAT: failure for drm/i915/dp: Clean up link BW/DSC slice config computation Patchwork
2025-11-28 20:49   ` Imre Deak
2025-12-01  9:46 ` ✗ i915.CI.Full: " Patchwork
2025-12-12 20:01 ` ✓ i915.CI.BAT: success for drm/i915/dp: Clean up link BW/DSC slice config computation (rev3) Patchwork
2025-12-13  4:00 ` ✓ i915.CI.Full: " Patchwork
