Intel-GFX Archive on lore.kernel.org
* [PATCH 00/13] drm/i915: skl+ watermark/latency stuff
@ 2025-09-05 14:51 Ville Syrjala
  2025-09-05 14:52 ` [PATCH 01/13] drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips Ville Syrjala
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:51 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

A bunch of cleanups for the watermark latency setup on skl+, and
a few potential fixes for some edge cases.

Ville Syrjälä (13):
  drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
  drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
  drm/i915: Tweak the read latency fixup code
  drm/i915: Don't pass the latency array to {skl,mtl}_read_wm_latency()
  drm/i915: Move adjust_wm_latency() out from
    {mtl,skl}_read_wm_latency()
  drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
  drm/i915: Extract increase_wm_latency()
  drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
  drm/i915: Extract sanitize_wm_latency()
  drm/i915: Flatten sanitize_wm_latency() a bit
  drm/i915: Make wm latencies monotonic
  drm/i915: Print both the original and adjusted wm latencies
  drm/i915: Make sure wm block/lines are non-decreasing

 drivers/gpu/drm/i915/display/skl_watermark.c | 160 +++++++++++++------
 drivers/gpu/drm/i915/soc/intel_dram.c        |  10 +-
 2 files changed, 118 insertions(+), 52 deletions(-)

-- 
2.49.1



* [PATCH 01/13] drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
  2025-09-05 14:51 [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjala
@ 2025-09-05 14:52 ` Ville Syrjala
  2025-09-05 14:52 ` [PATCH 02/13] drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it Ville Syrjala
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:52 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

While the spec only asks us to do the WM0 latency bump for 16Gb
DRAM devices, I believe we should also apply it for larger DRAM
chips. At the time the w/a was added there were no larger chips
on the market, but I think I've seen at least 32Gb DDR4 chips
being available these days.

Whether it's possible to actually find suitable DIMMs with larger
chips for the affected systems I don't know. It's also not known
whether the 1 usec latency bump would be sufficient for larger
chips. Someone would need to find such DIMMs and test this.
Fortunately we already have a bit of extra headroom with the
1 usec bump, as the actual requirement was 0.4 usec for
16Gb chips.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/skl_watermark.c |  4 ++--
 drivers/gpu/drm/i915/soc/intel_dram.c        | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index 0b9c86042e78..b7482d428868 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -3209,9 +3209,9 @@ adjust_wm_latency(struct intel_display *display,
 	}
 
 	/*
-	 * WA Level-0 adjustment for 16Gb DIMMs: SKL+
+	 * WA Level-0 adjustment for 16Gb+ DIMMs: SKL+
 	 * If we could not get dimm info enable this WA to prevent from
-	 * any underrun. If not able to get DIMM info assume 16Gb DIMM
+	 * any underrun. If not able to get DIMM info assume 16Gb+ DIMM
 	 * to avoid any underrun.
 	 */
 	if (!display->platform.dg2 && dram_info->has_16gb_dimms)
diff --git a/drivers/gpu/drm/i915/soc/intel_dram.c b/drivers/gpu/drm/i915/soc/intel_dram.c
index 149527827624..8e81573022ff 100644
--- a/drivers/gpu/drm/i915/soc/intel_dram.c
+++ b/drivers/gpu/drm/i915/soc/intel_dram.c
@@ -335,7 +335,7 @@ static bool
 skl_is_16gb_dimm(const struct dram_dimm_info *dimm)
 {
 	/* Convert total Gb to Gb per DRAM device */
-	return dimm->size / (intel_dimm_num_devices(dimm) ?: 1) == 16;
+	return dimm->size / (intel_dimm_num_devices(dimm) ?: 1) >= 16;
 }
 
 static void
@@ -354,7 +354,7 @@ skl_dram_get_dimm_info(struct drm_i915_private *i915,
 	}
 
 	drm_dbg_kms(&i915->drm,
-		    "CH%u DIMM %c size: %u Gb, width: X%u, ranks: %u, 16Gb DIMMs: %s\n",
+		    "CH%u DIMM %c size: %u Gb, width: X%u, ranks: %u, 16Gb+ DIMMs: %s\n",
 		    channel, dimm_name, dimm->size, dimm->width, dimm->ranks,
 		    str_yes_no(skl_is_16gb_dimm(dimm)));
 }
@@ -384,7 +384,7 @@ skl_dram_get_channel_info(struct drm_i915_private *i915,
 	ch->is_16gb_dimm = skl_is_16gb_dimm(&ch->dimm_l) ||
 		skl_is_16gb_dimm(&ch->dimm_s);
 
-	drm_dbg_kms(&i915->drm, "CH%u ranks: %u, 16Gb DIMMs: %s\n",
+	drm_dbg_kms(&i915->drm, "CH%u ranks: %u, 16Gb+ DIMMs: %s\n",
 		    channel, ch->ranks, str_yes_no(ch->is_16gb_dimm));
 
 	return 0;
@@ -406,7 +406,7 @@ skl_dram_get_channels_info(struct drm_i915_private *i915, struct dram_info *dram
 	u32 val;
 	int ret;
 
-	/* Assume 16Gb DIMMs are present until proven otherwise */
+	/* Assume 16Gb+ DIMMs are present until proven otherwise */
 	dram_info->has_16gb_dimms = true;
 
 	val = intel_uncore_read(&i915->uncore,
@@ -438,7 +438,7 @@ skl_dram_get_channels_info(struct drm_i915_private *i915, struct dram_info *dram
 	drm_dbg_kms(&i915->drm, "Memory configuration is symmetric? %s\n",
 		    str_yes_no(dram_info->symmetric_memory));
 
-	drm_dbg_kms(&i915->drm, "16Gb DIMMs: %s\n",
+	drm_dbg_kms(&i915->drm, "16Gb+ DIMMs: %s\n",
 		    str_yes_no(dram_info->has_16gb_dimms));
 
 	return 0;
-- 
2.49.1



* [PATCH 02/13] drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
  2025-09-05 14:51 [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjala
  2025-09-05 14:52 ` [PATCH 01/13] drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips Ville Syrjala
@ 2025-09-05 14:52 ` Ville Syrjala
  2025-09-05 14:52 ` [PATCH 03/13] drm/i915: Tweak the read latency fixup code Ville Syrjala
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:52 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

Currently the code assumes that every platform except dg2 needs the
16Gb DIMM w/a, while in reality it's only needed by skl and icl (and
their derivatives). Switch to a more specific platform check.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/skl_watermark.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index b7482d428868..8c434bc96971 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -3174,11 +3174,19 @@ void skl_watermark_ipc_init(struct intel_display *display)
 	skl_watermark_ipc_update(display);
 }
 
+static bool need_16gb_dimm_wa(struct intel_display *display)
+{
+	const struct dram_info *dram_info = intel_dram_info(display->drm);
+
+	return (display->platform.skylake || display->platform.kabylake ||
+		display->platform.coffeelake || display->platform.cometlake ||
+		DISPLAY_VER(display) == 11) && dram_info->has_16gb_dimms;
+}
+
 static void
 adjust_wm_latency(struct intel_display *display,
 		  u16 wm[], int num_levels, int read_latency)
 {
-	const struct dram_info *dram_info = intel_dram_info(display->drm);
 	int i, level;
 
 	/*
@@ -3214,7 +3222,7 @@ adjust_wm_latency(struct intel_display *display,
 	 * any underrun. If not able to get DIMM info assume 16Gb+ DIMM
 	 * to avoid any underrun.
 	 */
-	if (!display->platform.dg2 && dram_info->has_16gb_dimms)
+	if (need_16gb_dimm_wa(display))
 		wm[0] += 1;
 }
 
-- 
2.49.1



* [PATCH 03/13] drm/i915: Tweak the read latency fixup code
  2025-09-05 14:51 [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjala
  2025-09-05 14:52 ` [PATCH 01/13] drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips Ville Syrjala
  2025-09-05 14:52 ` [PATCH 02/13] drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it Ville Syrjala
@ 2025-09-05 14:52 ` Ville Syrjala
  2025-09-05 14:52 ` [PATCH 04/13] drm/i915: Don't pass the latency array to {skl, mtl}_read_wm_latency() Ville Syrjala
  2025-09-05 15:01 ` [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjälä
  4 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:52 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

If the WM0 latency is zero we need to bump it (and the WM1+
latencies) by a fixed amount. But any WM1+ level with zero latency
must not be touched, since that indicates the corresponding WM
level isn't supported.

Currently the loop doing that adjustment does work, but only because
the previous loop modified the num_levels used as the loop boundary.
That all seems a bit too fragile. Remove the num_levels adjustment
and instead adjust the read latency loop to abort when it encounters
a zero latency value.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/skl_watermark.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index 8c434bc96971..805481c92154 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -3198,8 +3198,6 @@ adjust_wm_latency(struct intel_display *display,
 		if (wm[level] == 0) {
 			for (i = level + 1; i < num_levels; i++)
 				wm[i] = 0;
-
-			num_levels = level;
 			break;
 		}
 	}
@@ -3212,8 +3210,14 @@ adjust_wm_latency(struct intel_display *display,
 	 * from the punit when level 0 response data is 0us.
 	 */
 	if (wm[0] == 0) {
-		for (level = 0; level < num_levels; level++)
+		wm[0] += read_latency;
+
+		for (level = 1; level < num_levels; level++) {
+			if (wm[level] == 0)
+				break;
+
 			wm[level] += read_latency;
+		}
 	}
 
 	/*
-- 
2.49.1



* [PATCH 04/13] drm/i915: Don't pass the latency array to {skl, mtl}_read_wm_latency()
  2025-09-05 14:51 [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjala
                   ` (2 preceding siblings ...)
  2025-09-05 14:52 ` [PATCH 03/13] drm/i915: Tweak the read latency fixup code Ville Syrjala
@ 2025-09-05 14:52 ` Ville Syrjala
  2025-09-05 15:01 ` [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjälä
  4 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:52 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

We always operate on i915->display.wm.skl_latency in
{skl,mtl}_read_wm_latency(), so there's no real need for the
caller to pass it in explicitly.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
 drivers/gpu/drm/i915/display/skl_watermark.c | 21 ++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/skl_watermark.c b/drivers/gpu/drm/i915/display/skl_watermark.c
index 805481c92154..9797c2131334 100644
--- a/drivers/gpu/drm/i915/display/skl_watermark.c
+++ b/drivers/gpu/drm/i915/display/skl_watermark.c
@@ -3184,9 +3184,10 @@ static bool need_16gb_dimm_wa(struct intel_display *display)
 }
 
 static void
-adjust_wm_latency(struct intel_display *display,
-		  u16 wm[], int num_levels, int read_latency)
+adjust_wm_latency(struct intel_display *display, int read_latency)
 {
+	u16 *wm = display->wm.skl_latency;
+	int num_levels = display->wm.num_levels;
 	int i, level;
 
 	/*
@@ -3230,9 +3231,9 @@ adjust_wm_latency(struct intel_display *display,
 		wm[0] += 1;
 }
 
-static void mtl_read_wm_latency(struct intel_display *display, u16 wm[])
+static void mtl_read_wm_latency(struct intel_display *display)
 {
-	int num_levels = display->wm.num_levels;
+	u16 *wm = display->wm.skl_latency;
 	u32 val;
 
 	val = intel_de_read(display, MTL_LATENCY_LP0_LP1);
@@ -3247,12 +3248,12 @@ static void mtl_read_wm_latency(struct intel_display *display, u16 wm[])
 	wm[4] = REG_FIELD_GET(MTL_LATENCY_LEVEL_EVEN_MASK, val);
 	wm[5] = REG_FIELD_GET(MTL_LATENCY_LEVEL_ODD_MASK, val);
 
-	adjust_wm_latency(display, wm, num_levels, 6);
+	adjust_wm_latency(display, 6);
 }
 
-static void skl_read_wm_latency(struct intel_display *display, u16 wm[])
+static void skl_read_wm_latency(struct intel_display *display)
 {
-	int num_levels = display->wm.num_levels;
+	u16 *wm = display->wm.skl_latency;
 	int read_latency = DISPLAY_VER(display) >= 12 ? 3 : 2;
 	int mult = display->platform.dg2 ? 2 : 1;
 	u32 val;
@@ -3284,7 +3285,7 @@ static void skl_read_wm_latency(struct intel_display *display, u16 wm[])
 	wm[6] = REG_FIELD_GET(GEN9_MEM_LATENCY_LEVEL_2_6_MASK, val) * mult;
 	wm[7] = REG_FIELD_GET(GEN9_MEM_LATENCY_LEVEL_3_7_MASK, val) * mult;
 
-	adjust_wm_latency(display, wm, num_levels, read_latency);
+	adjust_wm_latency(display, read_latency);
 }
 
 static void skl_setup_wm_latency(struct intel_display *display)
@@ -3295,9 +3296,9 @@ static void skl_setup_wm_latency(struct intel_display *display)
 		display->wm.num_levels = 8;
 
 	if (DISPLAY_VER(display) >= 14)
-		mtl_read_wm_latency(display, display->wm.skl_latency);
+		mtl_read_wm_latency(display);
 	else
-		skl_read_wm_latency(display, display->wm.skl_latency);
+		skl_read_wm_latency(display);
 
 	intel_print_wm_latency(display, "Gen9 Plane", display->wm.skl_latency);
 }
-- 
2.49.1



* [PATCH 00/13] drm/i915: skl+ watermark/latency stuff
@ 2025-09-05 14:58 Ville Syrjala
  0 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjala @ 2025-09-05 14:58 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

From: Ville Syrjälä <ville.syrjala@linux.intel.com>

A bunch of cleanups for the watermark latency setup on skl+, and
a few potential fixes for some edge cases.

Ville Syrjälä (13):
  drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
  drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
  drm/i915: Tweak the read latency fixup code
  drm/i915: Don't pass the latency array to {skl,mtl}_read_wm_latency()
  drm/i915: Move adjust_wm_latency() out from
    {mtl,skl}_read_wm_latency()
  drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
  drm/i915: Extract increase_wm_latency()
  drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
  drm/i915: Extract sanitize_wm_latency()
  drm/i915: Flatten sanitize_wm_latency() a bit
  drm/i915: Make wm latencies monotonic
  drm/i915: Print both the original and adjusted wm latencies
  drm/i915: Make sure wm block/lines are non-decreasing

 drivers/gpu/drm/i915/display/skl_watermark.c | 160 +++++++++++++------
 drivers/gpu/drm/i915/soc/intel_dram.c        |  10 +-
 2 files changed, 118 insertions(+), 52 deletions(-)

-- 
2.49.1



* Re: [PATCH 00/13] drm/i915: skl+ watermark/latency stuff
  2025-09-05 14:51 [PATCH 00/13] drm/i915: skl+ watermark/latency stuff Ville Syrjala
                   ` (3 preceding siblings ...)
  2025-09-05 14:52 ` [PATCH 04/13] drm/i915: Don't pass the latency array to {skl, mtl}_read_wm_latency() Ville Syrjala
@ 2025-09-05 15:01 ` Ville Syrjälä
  4 siblings, 0 replies; 7+ messages in thread
From: Ville Syrjälä @ 2025-09-05 15:01 UTC (permalink / raw)
  To: intel-gfx; +Cc: intel-xe

On Fri, Sep 05, 2025 at 05:51:59PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala@linux.intel.com>
> 
> A bunch of cleanups for the watermark latency setup on skl+, and
> a few potential fixes for some edge cases.

Please ignore this incomplete series. Apparently gnome-keyring has a
decades-old bug where it randomly fails, and this time it happened in
the middle of posting this series :(

> 
> Ville Syrjälä (13):
>   drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
>   drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
>   drm/i915: Tweak the read latency fixup code
>   drm/i915: Don't pass the latency array to {skl,mtl}_read_wm_latency()
>   drm/i915: Move adjust_wm_latency() out from
>     {mtl,skl}_read_wm_latency()
>   drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
>   drm/i915: Extract increase_wm_latency()
>   drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
>   drm/i915: Extract sanitize_wm_latency()
>   drm/i915: Flatten sanitize_wm_latency() a bit
>   drm/i915: Make wm latencies monotonic
>   drm/i915: Print both the original and adjusted wm latencies
>   drm/i915: Make sure wm block/lines are non-decreasing
> 
>  drivers/gpu/drm/i915/display/skl_watermark.c | 160 +++++++++++++------
>  drivers/gpu/drm/i915/soc/intel_dram.c        |  10 +-
>  2 files changed, 118 insertions(+), 52 deletions(-)
> 
> -- 
> 2.49.1

-- 
Ville Syrjälä
Intel

