amd-gfx.lists.freedesktop.org archive mirror
* [PATCH 00/12] DC Patches July 30, 2025
@ 2025-07-30 18:58 Roman.Li
  2025-07-30 18:58 ` [PATCH 01/12] drm/amd/display: fix a Null pointer dereference vulnerability Roman.Li
                   ` (12 more replies)
  0 siblings, 13 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Roman Li

From: Roman Li <Roman.Li@amd.com>

This DC patchset brings improvements in multiple areas. In summary, we highlight:

* Add interface to log hw state when underflow happens
* Fix hubp programming of 3dlut fast load
* Avoid Read Remote DPCD Many Times
* More liberal vmin/vmax update for freesync
* Fix dmub access race condition
 
Cc: Daniel Wheeler <daniel.wheeler@amd.com>

Aurabindo Pillai (2):
  drm/amd/display: fix dmub access race condition
  drm/amd/display: more liberal vmin/vmax update for freesync

Duncan Ma (1):
  drm/amd/display: Adjust AUX-less ALPM setting

Fangzhi Zuo (1):
  drm/amd/display: Avoid Read Remote DPCD Many Times

Jingwen Zhu (1):
  drm/amd/display: limited pll vco w/a v2

Mario Limonciello (1):
  drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL
    value"

Muhammad Ahmed (1):
  drm/amd/display: Adding interface to log hw state when underflow
    happens

Paul Hsieh (1):
  drm/amd/display: update dpp/disp clock from smu clock table

Reza Amini (1):
  drm/amd/display: Fixing hubp programming of 3dlut fast load

Ryan Seto (1):
  drm/amd/display: Toggle for Disable Force Pstate Allow on Disable

Siyang Liu (1):
  drm/amd/display: fix a Null pointer dereference vulnerability

Taimur Hassan (1):
  drm/amd/display: Promote DC to 3.2.344

 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 81 ++++++++++++++++---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 14 ++++
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  9 ++-
 .../display/dc/clk_mgr/dcn301/vg_clk_mgr.c    | 16 ++++
 drivers/gpu/drm/amd/display/dc/core/dc.c      | 37 ++++++---
 drivers/gpu/drm/amd/display/dc/dc.h           | 59 +++++++++++++-
 .../amd/display/dc/dml/dcn301/dcn301_fpu.c    | 20 ++++-
 .../display/dc/hubbub/dcn30/dcn30_hubbub.c    | 31 +++++++
 .../display/dc/hubbub/dcn30/dcn30_hubbub.h    |  6 ++
 .../display/dc/hubbub/dcn31/dcn31_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn32/dcn32_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn35/dcn35_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn401/dcn401_hubbub.c  |  2 +
 .../amd/display/dc/hubp/dcn10/dcn10_hubp.h    |  9 ++-
 .../amd/display/dc/hubp/dcn20/dcn20_hubp.h    |  1 +
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.c    | 26 ++++++
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.h    |  8 +-
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.c    | 15 ++++
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.h    |  6 +-
 .../amd/display/dc/hubp/dcn32/dcn32_hubp.c    |  3 +
 .../amd/display/dc/hubp/dcn35/dcn35_hubp.c    |  3 +
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.c  | 41 ++++++++++
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.h  |  8 +-
 .../amd/display/dc/hwss/dcn10/dcn10_hwseq.c   | 63 +++++++++++----
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.c   | 48 +++++++++++
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.h   |  5 ++
 .../amd/display/dc/hwss/dcn30/dcn30_init.c    |  1 +
 .../amd/display/dc/hwss/dcn31/dcn31_init.c    |  1 +
 .../amd/display/dc/hwss/dcn314/dcn314_init.c  |  1 +
 .../amd/display/dc/hwss/dcn32/dcn32_init.c    |  1 +
 .../amd/display/dc/hwss/dcn35/dcn35_init.c    |  1 +
 .../amd/display/dc/hwss/dcn351/dcn351_init.c  |  1 +
 .../amd/display/dc/hwss/dcn401/dcn401_hwseq.c | 36 +++++----
 .../amd/display/dc/hwss/dcn401/dcn401_init.c  |  1 +
 .../drm/amd/display/dc/hwss/hw_sequencer.h    |  4 +
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |  2 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  | 21 ++++-
 drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h   | 11 +++
 .../link/protocols/link_edp_panel_control.c   |  2 +-
 .../dc/resource/dcn32/dcn32_resource.h        |  3 +-
 .../dc/resource/dcn401/dcn401_resource.c      |  1 +
 .../dc/resource/dcn401/dcn401_resource.h      |  3 +-
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |  1 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  3 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn31.c |  1 +
 45 files changed, 545 insertions(+), 67 deletions(-)

-- 
2.34.1



* [PATCH 01/12] drm/amd/display: fix a Null pointer dereference vulnerability
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 02/12] drm/amd/display: Adjust AUX-less ALPM setting Roman.Li
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Siyang Liu, Mario Limonciello

From: Siyang Liu <Security@tencent.com>

[Why]
A NULL pointer dereference vulnerability exists in the AMD display
driver's (DC module) cleanup function dc_destruct().
When construction of the display core context (dc->ctx) fails
(due to a memory allocation failure), the pointer remains NULL.
When dc_destruct() is subsequently called during error handling,
there is no NULL check before dereferencing members such as
perf_trace (dc->ctx->perf_trace), causing a kernel NULL pointer
dereference crash.

[How]
Check if dc->ctx is non-NULL before dereferencing.
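
For context, a condensed sketch of the failure path being guarded
(illustrative only; the real dc.c flow has more steps):

    /* If allocating dc->ctx fails during construction, the common
     * error path still runs the full destructor, so dc_destruct()
     * must tolerate dc->ctx == NULL.
     */
    static bool dc_construct(struct dc *dc, ...)
    {
            dc->ctx = kzalloc(sizeof(*dc->ctx), GFP_KERNEL);
            if (!dc->ctx)
                    return false;           /* dc->ctx stays NULL */
            /* ... */
    }

    struct dc *dc_create(...)
    {
            if (!dc_construct(dc, init_params))
                    goto fail;
            /* ... */
    fail:
            dc_destruct(dc);        /* used to crash on dc->ctx->... */
            return NULL;
    }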

Link: https://lore.kernel.org/r/tencent_54FF4252EDFB6533090A491A25EEF3EDBF06@qq.com
Co-developed-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
(Updated commit text and removed unnecessary error message)
Signed-off-by: Siyang Liu <Security@tencent.com>
Signed-off-by: Roman Li <roman.li@amd.com>
Reviewed-by: Alex Hung <alex.hung@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index cf3893a2f8ce..e42cc37eae11 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -940,17 +940,18 @@ static void dc_destruct(struct dc *dc)
 	if (dc->link_srv)
 		link_destroy_link_service(&dc->link_srv);
 
-	if (dc->ctx->gpio_service)
-		dal_gpio_service_destroy(&dc->ctx->gpio_service);
+	if (dc->ctx) {
+		if (dc->ctx->gpio_service)
+			dal_gpio_service_destroy(&dc->ctx->gpio_service);
 
-	if (dc->ctx->created_bios)
-		dal_bios_parser_destroy(&dc->ctx->dc_bios);
+		if (dc->ctx->created_bios)
+			dal_bios_parser_destroy(&dc->ctx->dc_bios);
+		kfree(dc->ctx->logger);
+		dc_perf_trace_destroy(&dc->ctx->perf_trace);
 
-	kfree(dc->ctx->logger);
-	dc_perf_trace_destroy(&dc->ctx->perf_trace);
-
-	kfree(dc->ctx);
-	dc->ctx = NULL;
+		kfree(dc->ctx);
+		dc->ctx = NULL;
+	}
 
 	kfree(dc->bw_vbios);
 	dc->bw_vbios = NULL;
-- 
2.34.1



* [PATCH 02/12] drm/amd/display: Adjust AUX-less ALPM setting
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
  2025-07-30 18:58 ` [PATCH 01/12] drm/amd/display: fix a Null pointer dereference vulnerability Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 03/12] drm/amd/display: fix dmub access race condition Roman.Li
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Duncan Ma, Charlene Liu

From: Duncan Ma <Duncan.Ma@amd.com>

[Why & How]
Change ACDS period to support LTTPR.

Reviewed-by: Charlene Liu <charlene.liu@amd.com>
Signed-off-by: Duncan Ma <Duncan.Ma@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 .../drm/amd/display/dc/link/protocols/link_edp_panel_control.c  | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
index be714cbf6615..2c3e2945124a 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
@@ -1046,7 +1046,7 @@ bool edp_setup_replay(struct dc_link *link, const struct dc_stream_state *stream
 
 		if (link->replay_settings.config.alpm_mode == DC_ALPM_AUXLESS) {
 			alpm_config.bits.ALPM_MODE_SEL = 1;
-			alpm_config.bits.ACDS_PERIOD_DURATION = 1;
+			alpm_config.bits.ACDS_PERIOD_DURATION = 0;
 		}
 
 		dm_helpers_dp_write_dpcd(
-- 
2.34.1



* [PATCH 03/12] drm/amd/display: fix dmub access race condition
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
  2025-07-30 18:58 ` [PATCH 01/12] drm/amd/display: fix a Null pointer dereference vulnerability Roman.Li
  2025-07-30 18:58 ` [PATCH 02/12] drm/amd/display: Adjust AUX-less ALPM setting Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-08-05 11:51   ` Klara Modin
  2025-07-30 18:58 ` [PATCH 04/12] drm/amd/display: more liberal vmin/vmax update for freesync Roman.Li
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Nicholas Kazlauskas

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

Accessing DC from amdgpu_dm is usually preceded by acquisition of
the dc_lock mutex, and most of the DC API that DM calls is covered
by that lock. However, a few calls are not. Some DC APIs called
from interrupt context end up sending DMUB commands while other
threads are using DMUB. This was apparent from a race between the
calls for setting idle optimization enable/disable and the DC API
to set vmin/vmax.

Offload the call to dc_stream_adjust_vmin_vmax() to a worker thread
instead of calling it directly from the interrupt handler, so that
it can wait for dc_lock.
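
The constraint, as a minimal sketch (illustrative only, not the
patch itself):

    /* dc_lock is a mutex, and mutex_lock() may sleep, which is not
     * allowed in interrupt context.  So rather than:
     */
    static void dm_vupdate_high_irq(void *interrupt_params)
    {
            mutex_lock(&adev->dm.dc_lock);  /* would be a bug here */
            dc_stream_adjust_vmin_vmax(adev->dm.dc, stream, adjust);
            mutex_unlock(&adev->dm.dc_lock);
    }
    /* ...the handler queues a work item, and the worker takes
     * dc_lock in process context.  The stream is retained and the
     * adjust struct copied so both stay valid until the work runs.
     */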

Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 55 +++++++++++++++++--
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 14 +++++
 2 files changed, 63 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index c71167ffdb76..6762fc7de769 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -530,6 +530,50 @@ static void dm_pflip_high_irq(void *interrupt_params)
 		      amdgpu_crtc->crtc_id, amdgpu_crtc, vrr_active, (int)!e);
 }
 
+static void dm_handle_vmin_vmax_update(struct work_struct *offload_work)
+{
+	struct vupdate_offload_work *work = container_of(offload_work, struct vupdate_offload_work, work);
+	struct amdgpu_device *adev = work->adev;
+	struct dc_stream_state *stream = work->stream;
+	struct dc_crtc_timing_adjust *adjust = work->adjust;
+
+	mutex_lock(&adev->dm.dc_lock);
+	dc_stream_adjust_vmin_vmax(adev->dm.dc, stream, adjust);
+	mutex_unlock(&adev->dm.dc_lock);
+
+	dc_stream_release(stream);
+	kfree(work->adjust);
+	kfree(work);
+}
+
+static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
+	struct dc_stream_state *stream,
+	struct dc_crtc_timing_adjust *adjust)
+{
+	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
+	if (!offload_work) {
+		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
+		return;
+	}
+
+	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
+	if (!adjust_copy) {
+		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
+		kfree(offload_work);
+		return;
+	}
+
+	dc_stream_retain(stream);
+	memcpy(adjust_copy, adjust, sizeof(*adjust_copy));
+
+	INIT_WORK(&offload_work->work, dm_handle_vmin_vmax_update);
+	offload_work->adev = adev;
+	offload_work->stream = stream;
+	offload_work->adjust = adjust_copy;
+
+	queue_work(system_wq, &offload_work->work);
+}
+
 static void dm_vupdate_high_irq(void *interrupt_params)
 {
 	struct common_irq_params *irq_params = interrupt_params;
@@ -579,10 +623,9 @@ static void dm_vupdate_high_irq(void *interrupt_params)
 				    acrtc->dm_irq_params.stream,
 				    &acrtc->dm_irq_params.vrr_params);
 
-				dc_stream_adjust_vmin_vmax(
-				    adev->dm.dc,
-				    acrtc->dm_irq_params.stream,
-				    &acrtc->dm_irq_params.vrr_params.adjust);
+				schedule_dc_vmin_vmax(adev,
+					acrtc->dm_irq_params.stream,
+					&acrtc->dm_irq_params.vrr_params.adjust);
 				spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
 			}
 		}
@@ -672,8 +715,8 @@ static void dm_crtc_high_irq(void *interrupt_params)
 					     acrtc->dm_irq_params.stream,
 					     &acrtc->dm_irq_params.vrr_params);
 
-		dc_stream_adjust_vmin_vmax(adev->dm.dc, acrtc->dm_irq_params.stream,
-					   &acrtc->dm_irq_params.vrr_params.adjust);
+		schedule_dc_vmin_vmax(adev, acrtc->dm_irq_params.stream,
+				&acrtc->dm_irq_params.vrr_params.adjust);
 	}
 
 	/*
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index 67c3a7a967f2..94f312bae9ac 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -153,6 +153,20 @@ struct idle_workqueue {
 	bool running;
 };
 
+/**
+ * struct vupdate_offload_work - Work data for deferred vmin/vmax update
+ * @work: Kernel work data for the work event
+ * @adev: amdgpu_device back pointer
+ * @stream: DC stream associated with the crtc
+ * @adjust: DC CRTC timing adjust to be applied to the crtc
+ */
+struct vupdate_offload_work {
+	struct work_struct work;
+	struct amdgpu_device *adev;
+	struct dc_stream_state *stream;
+	struct dc_crtc_timing_adjust *adjust;
+};
+
 #define MAX_LUMINANCE_DATA_POINTS 99
 
 /**
-- 
2.34.1



* [PATCH 04/12] drm/amd/display: more liberal vmin/vmax update for freesync
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (2 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 03/12] drm/amd/display: fix dmub access race condition Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 05/12] drm/amd/display: update dpp/disp clock from smu clock table Roman.Li
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Nicholas Kazlauskas, Ray Wu

From: Aurabindo Pillai <aurabindo.pillai@amd.com>

[Why]
FAMS2 expects vmin/vmax to be updated even when freesync is off but
supported. Currently we only update them when freesync is enabled.

[How]
Change the vsync handlers such that dc_stream_adjust_vmin_vmax() is
called irrespective of whether freesync is enabled. If freesync is
supported, there is no harm in updating the vmin/vmax registers.
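
Note the guard used in both handlers simplifies by absorption:

    /*   fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)
     * is equivalent (A || (!A && B) == A || B) to:
     *   fs_active_var_en || (!replay_en && !psr_en)
     * i.e. always program vmin/vmax while freesync is actively
     * variable, and otherwise only when neither PSR nor Panel
     * Replay is enabled.
     */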

Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/3546

Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
Signed-off-by: Ray Wu <ray.wu@amd.com>
Tested-by: Daniel Wheeler <daniel.wheeler@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
Reviewed-by: ChiaHsuan Chung <chiahsuan.chung@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 28 +++++++++++++------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 6762fc7de769..16347ca2396a 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -612,6 +612,11 @@ static void dm_vupdate_high_irq(void *interrupt_params)
 		 * if a pageflip happened inside front-porch.
 		 */
 		if (vrr_active) {
+			bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+			bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+			bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state
+				== VRR_STATE_ACTIVE_VARIABLE;
+
 			amdgpu_dm_crtc_handle_vblank(acrtc);
 
 			/* BTR processing for pre-DCE12 ASICs */
@@ -623,9 +628,11 @@ static void dm_vupdate_high_irq(void *interrupt_params)
 				    acrtc->dm_irq_params.stream,
 				    &acrtc->dm_irq_params.vrr_params);
 
-				schedule_dc_vmin_vmax(adev,
-					acrtc->dm_irq_params.stream,
-					&acrtc->dm_irq_params.vrr_params.adjust);
+				if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+					schedule_dc_vmin_vmax(adev,
+						acrtc->dm_irq_params.stream,
+						&acrtc->dm_irq_params.vrr_params.adjust);
+				}
 				spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
 			}
 		}
@@ -708,15 +715,20 @@ static void dm_crtc_high_irq(void *interrupt_params)
 	spin_lock_irqsave(&adev_to_drm(adev)->event_lock, flags);
 
 	if (acrtc->dm_irq_params.stream &&
-	    acrtc->dm_irq_params.vrr_params.supported &&
-	    acrtc->dm_irq_params.freesync_config.state ==
-		    VRR_STATE_ACTIVE_VARIABLE) {
+		acrtc->dm_irq_params.vrr_params.supported) {
+		bool replay_en = acrtc->dm_irq_params.stream->link->replay_settings.replay_feature_enabled;
+		bool psr_en = acrtc->dm_irq_params.stream->link->psr_settings.psr_feature_enabled;
+		bool fs_active_var_en = acrtc->dm_irq_params.freesync_config.state == VRR_STATE_ACTIVE_VARIABLE;
+
 		mod_freesync_handle_v_update(adev->dm.freesync_module,
 					     acrtc->dm_irq_params.stream,
 					     &acrtc->dm_irq_params.vrr_params);
 
-		schedule_dc_vmin_vmax(adev, acrtc->dm_irq_params.stream,
-				&acrtc->dm_irq_params.vrr_params.adjust);
+		/* update vmin_vmax only if freesync is enabled, or only if PSR and REPLAY are disabled */
+		if (fs_active_var_en || (!fs_active_var_en && !replay_en && !psr_en)) {
+			schedule_dc_vmin_vmax(adev, acrtc->dm_irq_params.stream,
+					&acrtc->dm_irq_params.vrr_params.adjust);
+		}
 	}
 
 	/*
-- 
2.34.1



* [PATCH 05/12] drm/amd/display: update dpp/disp clock from smu clock table
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (3 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 04/12] drm/amd/display: more liberal vmin/vmax update for freesync Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 06/12] drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL value" Roman.Li
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Paul Hsieh, Nicholas Kazlauskas

From: Paul Hsieh <Paul.Hsieh@amd.com>

[Why]
Some high-resolution monitors fail to display properly because this
platform does not report sufficiently high DPP and DISP clock
frequencies.

[How]
Update the DISP and DPP clocks from the SMU clock table so that DML
can filter out modes that cannot be supported.
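
Condensed, the change amounts to this (names from the diffs below;
sketch only):

    /* dispclk/dppclk are not tied to a voltage level on this ASIC,
     * so every bandwidth-table entry gets the SMU-reported maximum,
     * and the DML side falls back to the old per-level value only
     * when the table reports 0:
     */
    max_dispclk = find_max_clk_value(clock_table->DispClocks,
                                     clock_table->NumDispClkLevelsEnabled);
    for (i = 0; i < bw_params->clk_table.num_entries; i++)
            bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;

    s[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
            dcn3_01_soc.clock_limits[closest_clk_lvl].dispclk_mhz;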

Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: Paul Hsieh <Paul.Hsieh@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 .../display/dc/clk_mgr/dcn301/vg_clk_mgr.c    | 16 +++++++++++++++
 .../amd/display/dc/dml/dcn301/dcn301_fpu.c    | 20 ++++++++++++++++---
 2 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
index 9e2ef0e724fc..7aee02d56292 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
@@ -563,6 +563,7 @@ static void vg_clk_mgr_helper_populate_bw_params(
 {
 	int i, j;
 	struct clk_bw_params *bw_params = clk_mgr->base.bw_params;
+	uint32_t max_dispclk = 0, max_dppclk = 0;
 
 	j = -1;
 
@@ -584,6 +585,15 @@ static void vg_clk_mgr_helper_populate_bw_params(
 		return;
 	}
 
+	/* dispclk and dppclk can be max at any voltage, same number of levels for both */
+	if (clock_table->NumDispClkLevelsEnabled <= VG_NUM_DISPCLK_DPM_LEVELS &&
+	    clock_table->NumDispClkLevelsEnabled <= VG_NUM_DPPCLK_DPM_LEVELS) {
+		max_dispclk = find_max_clk_value(clock_table->DispClocks, clock_table->NumDispClkLevelsEnabled);
+		max_dppclk = find_max_clk_value(clock_table->DppClocks, clock_table->NumDispClkLevelsEnabled);
+	} else {
+		ASSERT(0);
+	}
+
 	bw_params->clk_table.num_entries = j + 1;
 
 	for (i = 0; i < bw_params->clk_table.num_entries - 1; i++, j--) {
@@ -591,11 +601,17 @@ static void vg_clk_mgr_helper_populate_bw_params(
 		bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[j].memclk;
 		bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[j].voltage;
 		bw_params->clk_table.entries[i].dcfclk_mhz = find_dcfclk_for_voltage(clock_table, clock_table->DfPstateTable[j].voltage);
+
+		/* Now update clocks we do read */
+		bw_params->clk_table.entries[i].dispclk_mhz = max_dispclk;
+		bw_params->clk_table.entries[i].dppclk_mhz = max_dppclk;
 	}
 	bw_params->clk_table.entries[i].fclk_mhz = clock_table->DfPstateTable[j].fclk;
 	bw_params->clk_table.entries[i].memclk_mhz = clock_table->DfPstateTable[j].memclk;
 	bw_params->clk_table.entries[i].voltage = clock_table->DfPstateTable[j].voltage;
 	bw_params->clk_table.entries[i].dcfclk_mhz = find_max_clk_value(clock_table->DcfClocks, VG_NUM_DCFCLK_DPM_LEVELS);
+	bw_params->clk_table.entries[i].dispclk_mhz = find_max_clk_value(clock_table->DispClocks, VG_NUM_DISPCLK_DPM_LEVELS);
+	bw_params->clk_table.entries[i].dppclk_mhz = find_max_clk_value(clock_table->DppClocks, VG_NUM_DPPCLK_DPM_LEVELS);
 
 	bw_params->vram_type = bios_info->memory_type;
 	bw_params->num_channels = bios_info->ma_channel_number;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
index 0c0b2d67c9cd..2066a65c69bb 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn301/dcn301_fpu.c
@@ -326,7 +326,7 @@ void dcn301_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p
 	struct dcn301_resource_pool *pool = TO_DCN301_RES_POOL(dc->res_pool);
 	struct clk_limit_table *clk_table = &bw_params->clk_table;
 	unsigned int i, closest_clk_lvl;
-	int j;
+	int j = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0;
 
 	dc_assert_fp_enabled();
 
@@ -338,6 +338,15 @@ void dcn301_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p
 	dcn3_01_soc.num_chans = bw_params->num_channels;
 
 	ASSERT(clk_table->num_entries);
+
+	/* Prepass to find max clocks independent of voltage level. */
+	for (i = 0; i < clk_table->num_entries; ++i) {
+		if (clk_table->entries[i].dispclk_mhz > max_dispclk_mhz)
+			max_dispclk_mhz = clk_table->entries[i].dispclk_mhz;
+		if (clk_table->entries[i].dppclk_mhz > max_dppclk_mhz)
+			max_dppclk_mhz = clk_table->entries[i].dppclk_mhz;
+	}
+
 	for (i = 0; i < clk_table->num_entries; i++) {
 		/* loop backwards*/
 		for (closest_clk_lvl = 0, j = dcn3_01_soc.num_states - 1; j >= 0; j--) {
@@ -353,8 +362,13 @@ void dcn301_fpu_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_p
 		s[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
 		s[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2;
 
-		s[i].dispclk_mhz = dcn3_01_soc.clock_limits[closest_clk_lvl].dispclk_mhz;
-		s[i].dppclk_mhz = dcn3_01_soc.clock_limits[closest_clk_lvl].dppclk_mhz;
+		/* Clocks independent of voltage level. */
+		s[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
+			dcn3_01_soc.clock_limits[closest_clk_lvl].dispclk_mhz;
+
+		s[i].dppclk_mhz = max_dppclk_mhz ? max_dppclk_mhz :
+			dcn3_01_soc.clock_limits[closest_clk_lvl].dppclk_mhz;
+
 		s[i].dram_bw_per_chan_gbps =
 			dcn3_01_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;
 		s[i].dscclk_mhz = dcn3_01_soc.clock_limits[closest_clk_lvl].dscclk_mhz;
-- 
2.34.1



* [PATCH 06/12] drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL value"
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (4 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 05/12] drm/amd/display: update dpp/disp clock from smu clock table Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 07/12] drm/amd/display: Avoid Read Remote DPCD Many Times Roman.Li
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Mario Limonciello, stable

From: Mario Limonciello <mario.limonciello@amd.com>

This reverts commit 66abb996999de0d440a02583a6e70c2c24deab45.
This broke custom brightness curves, but it wasn't obvious because
of other related changes. Custom brightness curves are always based
on a 0-255 input signal. The correct fix was to fix the default
value, which was done by [1].
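
Worked example of what the revert restores (assuming
AMDGPU_MAX_BL_LEVEL is 255, matching the 0-255 domain the custom
curves are defined over):

    /* With min=0, max=255, a 50% user input of 128 stays on the
     * curve's 0..255 scale end to end:
     *   scale_input_to_fw(0, 255, 128)
     *     = DIV_ROUND_CLOSEST_ULL(128 * 255, 255 - 0) = 128
     * Rescaling against the larger MAX_BACKLIGHT_LEVEL instead
     * produced inputs outside the range the custom curve expects.
     */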

Closes: https://gitlab.freedesktop.org/drm/amd/-/issues/4412
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/amd-gfx/0f094c4b-d2a3-42cd-824c-dc2858a5618d@kernel.org/T/#m69f875a7e69aa22df3370b3e3a9e69f4a61fdaf2

Reviewed-by: Alex Hung <alex.hung@amd.com>
Signed-off-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 16347ca2396a..31ea57edeb45 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -4800,16 +4800,16 @@ static int get_brightness_range(const struct amdgpu_dm_backlight_caps *caps,
 	return 1;
 }
 
-/* Rescale from [min..max] to [0..MAX_BACKLIGHT_LEVEL] */
+/* Rescale from [min..max] to [0..AMDGPU_MAX_BL_LEVEL] */
 static inline u32 scale_input_to_fw(int min, int max, u64 input)
 {
-	return DIV_ROUND_CLOSEST_ULL(input * MAX_BACKLIGHT_LEVEL, max - min);
+	return DIV_ROUND_CLOSEST_ULL(input * AMDGPU_MAX_BL_LEVEL, max - min);
 }
 
-/* Rescale from [0..MAX_BACKLIGHT_LEVEL] to [min..max] */
+/* Rescale from [0..AMDGPU_MAX_BL_LEVEL] to [min..max] */
 static inline u32 scale_fw_to_input(int min, int max, u64 input)
 {
-	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), MAX_BACKLIGHT_LEVEL);
+	return min + DIV_ROUND_CLOSEST_ULL(input * (max - min), AMDGPU_MAX_BL_LEVEL);
 }
 
 static void convert_custom_brightness(const struct amdgpu_dm_backlight_caps *caps,
-- 
2.34.1



* [PATCH 07/12] drm/amd/display: Avoid Read Remote DPCD Many Times
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (5 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 06/12] drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL value" Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:58 ` [PATCH 08/12] drm/amd/display: limited pll vco w/a v2 Roman.Li
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Fangzhi Zuo, Wayne Lin

From: Fangzhi Zuo <Jerry.Zuo@amd.com>

Reading remote DPCD is time consuming. Instead of reading each byte
one by one, read 16 bytes in a single transaction.
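
Why a single 16-byte read suffices (DPCD addresses as defined in
drm_dp.h):

    /* DP_LINK_BW_SET                  = 0x100
     * DP_LANE_COUNT_SET               = 0x101
     * DP_MAIN_LINK_CHANNEL_CODING_SET = 0x108
     * All three registers fall within the 16 bytes starting at
     * 0x100, so one drm_dp_dpcd_read() of 16 bytes at
     * DP_LINK_BW_SET replaces three 1-byte AUX transactions.
     */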

Reviewed-by: ChiaHsuan (Tom) Chung <chiahsuan.chung@amd.com>
Signed-off-by: Fangzhi Zuo <Jerry.Zuo@amd.com>
Signed-off-by: Wayne Lin <Wayne.Lin@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c  | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
index 137f18d41f1b..32281bf62f70 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
@@ -1763,14 +1763,17 @@ static bool dp_get_link_current_set_bw(struct drm_dp_aux *aux, uint32_t *cur_lin
 	union lane_count_set lane_count;
 	u8 dp_link_encoding;
 	u8 link_bw_set = 0;
+	u8 data[16] = {0};
 
 	*cur_link_bw = 0;
 
-	if (drm_dp_dpcd_read(aux, DP_MAIN_LINK_CHANNEL_CODING_SET, &dp_link_encoding, 1) != 1 ||
-		drm_dp_dpcd_read(aux, DP_LANE_COUNT_SET, &lane_count.raw, 1) != 1 ||
-		drm_dp_dpcd_read(aux, DP_LINK_BW_SET, &link_bw_set, 1) != 1)
+	if (drm_dp_dpcd_read(aux, DP_LINK_BW_SET, data, 16) != 16)
 		return false;
 
+	dp_link_encoding = data[DP_MAIN_LINK_CHANNEL_CODING_SET - DP_LINK_BW_SET];
+	link_bw_set = data[DP_LINK_BW_SET - DP_LINK_BW_SET];
+	lane_count.raw = data[DP_LANE_COUNT_SET - DP_LINK_BW_SET];
+
 	switch (dp_link_encoding) {
 	case DP_8b_10b_ENCODING:
 		link_rate = link_bw_set;
-- 
2.34.1



* [PATCH 08/12] drm/amd/display: limited pll vco w/a v2
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (6 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 07/12] drm/amd/display: Avoid Read Remote DPCD Many Times Roman.Li
@ 2025-07-30 18:58 ` Roman.Li
  2025-07-30 18:59 ` [PATCH 09/12] drm/amd/display: Fixing hubp programming of 3dlut fast load Roman.Li
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:58 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Jingwen Zhu, Nicholas Kazlauskas

From: Jingwen Zhu <Jingwen.Zhu@amd.com>

[Why/How]
The workaround causes a black screen issue on reboot. Rework it (v2)
behind a new DMUB boot option (override_hbr3_pll_vco).
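
The flag rides the existing 32-bit DMUB boot-options word; a sketch
of the bookkeeping (condensed from the diff below):

    /* union dmub_fw_boot_options must stay exactly 32 bits, so the
     * new bit is carved out of the reserved field:
     *   uint32_t override_hbr3_pll_vco : 1;  // new
     *   uint32_t reserved              : 5;  // was 6
     * The driver copies dmub_srv_hw_params.override_hbr3_pll_vco
     * into boot_options.bits in
     * dmub_dcn31_enable_dmub_boot_options().
     */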

Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: Jingwen Zhu <Jingwen.Zhu@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h       | 1 +
 drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h   | 3 ++-
 drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c | 1 +
 3 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
index 0bafb6710761..87b761ac3135 100644
--- a/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
+++ b/drivers/gpu/drm/amd/display/dmub/dmub_srv.h
@@ -316,6 +316,7 @@ struct dmub_srv_hw_params {
 	bool disable_sldo_opt;
 	bool enable_non_transparent_setconfig;
 	bool lower_hbr3_phy_ssc;
+	bool override_hbr3_pll_vco;
 };
 
 /**
diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
index dcae768c2cf4..79b5b1bb9b93 100644
--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
@@ -843,7 +843,8 @@ union dmub_fw_boot_options {
 		uint32_t ips_sequential_ono: 1; /**< 1 to enable sequential ONO IPS sequence */
 		uint32_t disable_sldo_opt: 1; /**< 1 to disable SLDO optimizations */
 		uint32_t lower_hbr3_phy_ssc: 1; /**< 1 to lower hbr3 phy ssc to 0.125 percent */
-		uint32_t reserved : 6; /**< reserved */
+		uint32_t override_hbr3_pll_vco: 1; /**< 1 to override the hbr3 pll vco to 0 */
+		uint32_t reserved : 5; /**< reserved */
 	} bits; /**< boot bits */
 	uint32_t all; /**< 32-bit access to bits */
 };
diff --git a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
index 3f38db752b84..4777c7203b2c 100644
--- a/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
+++ b/drivers/gpu/drm/amd/display/dmub/src/dmub_dcn31.c
@@ -377,6 +377,7 @@ void dmub_dcn31_enable_dmub_boot_options(struct dmub_srv *dmub, const struct dmu
 	boot_options.bits.dpia_hpd_int_enable_supported = params->dpia_hpd_int_enable_supported;
 	boot_options.bits.power_optimization = params->power_optimization;
 	boot_options.bits.lower_hbr3_phy_ssc = params->lower_hbr3_phy_ssc;
+	boot_options.bits.override_hbr3_pll_vco = params->override_hbr3_pll_vco;
 
 	boot_options.bits.sel_mux_phy_c_d_phy_f_g = (dmub->asic == DMUB_ASIC_DCN31B) ? 1 : 0;
 
-- 
2.34.1



* [PATCH 09/12] drm/amd/display: Fixing hubp programming of 3dlut fast load
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (7 preceding siblings ...)
  2025-07-30 18:58 ` [PATCH 08/12] drm/amd/display: limited pll vco w/a v2 Roman.Li
@ 2025-07-30 18:59 ` Roman.Li
  2025-07-30 18:59 ` [PATCH 10/12] drm/amd/display: Toggle for Disable Force Pstate Allow on Disable Roman.Li
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:59 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Reza Amini, Krunoslav Kovac

From: Reza Amini <reza.amini@amd.com>

[why]
HUBP needs to know the size of the LUT's destination in MPC. This
currently defaults to 17 and needs to be set according to the
specific LUT size.

[how]
Define and program the missing HUBP field. Take this opportunity to
consolidate the 3D LUT fast-load programming into a single hubp
function and a single mpc function.
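
A hypothetical caller sketch of the consolidated hook (field values
are illustrative; the struct and hook come from the diff below):

    struct hubp_fl_3dlut_config cfg = {
            .enabled = true,
            .width   = hubp_3dlut_fl_width_17,
            .layout  = DC_CM2_GPU_MEM_LAYOUT_1D_PACKED_LINEAR,
            /* mode, format, bias/scale, address and crossbar
             * selects filled in by the caller */
    };

    if (hubp->funcs->hubp_program_3dlut_fl_config)
            hubp->funcs->hubp_program_3dlut_fl_config(hubp, &cfg);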

Reviewed-by: Krunoslav Kovac <krunoslav.kovac@amd.com>
Signed-off-by: Reza Amini <reza.amini@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h           | 26 ++++++++
 .../amd/display/dc/hubp/dcn10/dcn10_hubp.h    |  1 +
 .../amd/display/dc/hubp/dcn20/dcn20_hubp.h    |  1 +
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.c  | 38 +++++++++++
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.h  |  4 ++
 .../amd/display/dc/hwss/dcn10/dcn10_hwseq.c   | 63 ++++++++++++++-----
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  | 19 +++++-
 drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h   | 11 ++++
 8 files changed, 148 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 5653c1673aec..c013b28207a1 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -1311,6 +1311,32 @@ union dc_3dlut_state {
 };
 
 
+#define MATRIX_9C__DIM_128_ALIGNED_LEN   16 // 9+8 :  9 * 8 +  7 * 8 = 72  + 56  = 128 % 128 = 0
+#define MATRIX_17C__DIM_128_ALIGNED_LEN  32 //17+15:  17 * 8 + 15 * 8 = 136 + 120 = 256 % 128 = 0
+#define MATRIX_33C__DIM_128_ALIGNED_LEN  64 //17+47:  17 * 8 + 47 * 8 = 136 + 376 = 512 % 128 = 0
+
+struct lut_rgb {
+	uint16_t b;
+	uint16_t g;
+	uint16_t r;
+	uint16_t padding;
+};
+
+//this structure maps directly to how the lut will read it from memory
+struct lut_mem_mapping {
+	union {
+		//NATIVE MODE 1, 2
+		//RGB layout          [b][g][r]      //red  is 128 byte aligned
+		//BGR layout          [r][g][b]      //blue is 128 byte aligned
+		struct lut_rgb rgb_17c[17][17][MATRIX_17C__DIM_128_ALIGNED_LEN];
+		struct lut_rgb rgb_33c[33][33][MATRIX_33C__DIM_128_ALIGNED_LEN];
+
+		//TRANSFORMED
+		uint16_t linear_rgb[(33*33*33*4/128+1)*128];
+	};
+	uint16_t size;
+};
+
 struct dc_rmcm_3dlut {
 	bool isInUse;
 	const struct dc_stream_state *stream;
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
index f8f991785d4f..0b7547d5b488 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
@@ -671,6 +671,7 @@ struct dcn_fl_regs_st {
 	uint32_t lut_done;
 	uint32_t lut_addr_mode;
 	uint32_t lut_width;
+	uint32_t lut_mpc_width;
 	uint32_t lut_tmz;
 	uint32_t lut_crossbar_sel_r;
 	uint32_t lut_crossbar_sel_g;
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.h
index 62369be070ea..f325db555102 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.h
@@ -264,6 +264,7 @@
 	type HUBP_3DLUT_DONE;\
 	type HUBP_3DLUT_ADDRESSING_MODE;\
 	type HUBP_3DLUT_WIDTH;\
+	type HUBP_3DLUT_MPC_WIDTH;\
 	type HUBP_3DLUT_TMZ;\
 	type HUBP_3DLUT_CROSSBAR_SELECT_Y_G;\
 	type HUBP_3DLUT_CROSSBAR_SELECT_CB_B;\
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
index 705b98b1b6cc..5028180ad80a 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
@@ -127,6 +127,43 @@ void hubp401_program_3dlut_fl_format(struct hubp *hubp, enum hubp_3dlut_fl_forma
 	REG_UPDATE(_3DLUT_FL_CONFIG, HUBP0_3DLUT_FL_FORMAT, format);
 }
 
+void hubp401_program_3dlut_fl_config(
+	struct hubp *hubp,
+	struct hubp_fl_3dlut_config *cfg)
+{
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	uint32_t mpc_width = {(cfg->width == 17) ? 0 : 1};
+	uint32_t width = {cfg->width};
+
+	if (cfg->layout == DC_CM2_GPU_MEM_LAYOUT_1D_PACKED_LINEAR)
+		width = (cfg->width == 17) ? 4916 : 35940;
+
+	REG_UPDATE_2(_3DLUT_FL_CONFIG,
+		HUBP0_3DLUT_FL_MODE, cfg->mode,
+		HUBP0_3DLUT_FL_FORMAT, cfg->format);
+
+	REG_UPDATE_2(_3DLUT_FL_BIAS_SCALE,
+		HUBP0_3DLUT_FL_BIAS, cfg->bias,
+		HUBP0_3DLUT_FL_SCALE, cfg->scale);
+
+	REG_UPDATE(HUBP_3DLUT_ADDRESS_HIGH,
+		HUBP_3DLUT_ADDRESS_HIGH, cfg->address.lut3d.addr.high_part);
+	REG_UPDATE(HUBP_3DLUT_ADDRESS_LOW,
+		HUBP_3DLUT_ADDRESS_LOW, cfg->address.lut3d.addr.low_part);
+
+	//cross bar
+	REG_UPDATE_8(HUBP_3DLUT_CONTROL,
+		HUBP_3DLUT_MPC_WIDTH, mpc_width,
+		HUBP_3DLUT_WIDTH, width,
+		HUBP_3DLUT_CROSSBAR_SELECT_CR_R, cfg->crossbar_bit_slice_cr_r,
+		HUBP_3DLUT_CROSSBAR_SELECT_Y_G, cfg->crossbar_bit_slice_y_g,
+		HUBP_3DLUT_CROSSBAR_SELECT_CB_B, cfg->crossbar_bit_slice_cb_b,
+		HUBP_3DLUT_ADDRESSING_MODE, cfg->addr_mode,
+		HUBP_3DLUT_TMZ, cfg->protection_bits,
+		HUBP_3DLUT_ENABLE, cfg->enabled ? 1 : 0);
+}
+
 void hubp401_update_mall_sel(struct hubp *hubp, uint32_t mall_sel, bool c_cursor)
 {
 	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
@@ -1033,6 +1070,7 @@ static struct hubp_funcs dcn401_hubp_funcs = {
 	.hubp_program_3dlut_fl_crossbar = hubp401_program_3dlut_fl_crossbar,
 	.hubp_get_3dlut_fl_done = hubp401_get_3dlut_fl_done,
 	.hubp_clear_tiling = hubp401_clear_tiling,
+	.hubp_program_3dlut_fl_config = hubp401_program_3dlut_fl_config,
 };
 
 bool hubp401_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
index 608e6153fa68..887b479ed1d7 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
@@ -349,6 +349,10 @@ void hubp401_program_3dlut_fl_format(struct hubp *hubp, enum hubp_3dlut_fl_forma
 
 void hubp401_program_3dlut_fl_mode(struct hubp *hubp, enum hubp_3dlut_fl_mode mode);
 
+void hubp401_program_3dlut_fl_config(
+	struct hubp *hubp,
+	struct hubp_fl_3dlut_config *cfg);
+
 void hubp401_clear_tiling(struct hubp *hubp);
 
 void hubp401_vready_at_or_After_vsync(struct hubp *hubp,
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
index 39910f73ecd0..79c9bea78c47 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
@@ -328,19 +328,25 @@ static void dcn10_log_hubp_states(struct dc *dc, void *log_ctx)
 	}
 
 	DTN_INFO("\n=======HUBP FL======\n");
-	DTN_INFO(
-		"HUBP FL:  Enabled  Done  adr_mode  width  tmz  xbar_sel_R  xbar_sel_G  xbar_sel_B  adr_hi  adr_low  REFCYC  Bias   Scale       Mode      Format\n");
+	char pLabels[18][50] = {
+		"inst", "Enabled ", "Done ", "adr_mode ", "width ", "mpc_width ",
+		"tmz", "xbar_sel_R", "xbar_sel_G", "xbar_sel_B", "adr_hi ",
+		"adr_low", "REFCYC", "Bias", "Scale", "Mode",
+		"Format", "prefetch"};
+
 	for (i = 0; i < pool->pipe_count; i++) {
 		struct dcn_hubp_state *s = &(TO_DCN10_HUBP(pool->hubps[i])->state);
 		struct dcn_fl_regs_st *fl_regs = &s->fl_regs;
+		struct _vcs_dpi_display_dlg_regs_st *dlg_regs = &s->dlg_attr;
 
 		if (!s->blank_en) {
-			DTN_INFO("[%2d]:  %5xh  %6xh  %5d  %6d  %8xh  %2xh  %6xh  %6d  %8d  %8d  %7d  %8xh %5x %5x %5x",
+			uint32_t values[] = {
 					pool->hubps[i]->inst,
 					fl_regs->lut_enable,
 					fl_regs->lut_done,
 					fl_regs->lut_addr_mode,
 					fl_regs->lut_width,
+					fl_regs->lut_mpc_width,
 					fl_regs->lut_tmz,
 					fl_regs->lut_crossbar_sel_r,
 					fl_regs->lut_crossbar_sel_g,
@@ -351,8 +357,13 @@ static void dcn10_log_hubp_states(struct dc *dc, void *log_ctx)
 					fl_regs->lut_fl_bias,
 					fl_regs->lut_fl_scale,
 					fl_regs->lut_fl_mode,
-					fl_regs->lut_fl_format);
-			DTN_INFO("\n");
+					fl_regs->lut_fl_format,
+					dlg_regs->dst_y_prefetch};
+
+			int num_elements = 18;
+
+			for (int j = 0; j < num_elements; j++)
+				DTN_INFO("%s \t %8xh\n", pLabels[j], values[j]);
 		}
 	}
 
@@ -541,19 +552,43 @@ static void dcn10_log_color_state(struct dc *dc,
 		 dc->caps.color.mpc.ogam_ram,
 		 dc->caps.color.mpc.ocsc);
 	DTN_INFO("===== MPC RMCM 3DLUT =====\n");
-	DTN_INFO("MPCC:  SIZE  MODE  MODE_CUR  RD_SEL  30BIT_EN  WR_EN_MASK  RAM_SEL  OUT_NORM_FACTOR	FL_SEL	OUT_OFFSET	OUT_SCALE	FL_DONE	SOFT_UNDERFLOW	HARD_UNDERFLOW MEM_PWR_ST	FORCE	DIS	MODE\n");
+	char pLabels[19][50] = {
+		"MPCC", "SIZE", "MODE", "MODE_CUR", "RD_SEL",
+		"30BIT_EN", "WR_EN_MASK", "RAM_SEL", "OUT_NORM_FACTOR", "FL_SEL",
+		"OUT_OFFSET", "OUT_SCALE", "FL_DONE", "SOFT_UNDERFLOW", "HARD_UNDERFLOW",
+		"MEM_PWR_ST", "FORCE", "DIS", "MODE"};
+
 	for (i = 0; i < pool->mpcc_count; i++) {
 		struct mpcc_state s = {0};
 
 		pool->mpc->funcs->read_mpcc_state(pool->mpc, i, &s);
-		if (s.opp_id != 0xf)
-			DTN_INFO("[%2d]:  %4xh  %4xh  %6xh  %4x  %4x  %4x  %4x  %4x %4xh  %4xh  %6xh  %4x  %4x  %4x  %4x  %4x  %4x  %4x\n",
-				i, s.rmcm_regs.rmcm_3dlut_size, s.rmcm_regs.rmcm_3dlut_mode, s.rmcm_regs.rmcm_3dlut_mode_cur,
-				s.rmcm_regs.rmcm_3dlut_read_sel, s.rmcm_regs.rmcm_3dlut_30bit_en, s.rmcm_regs.rmcm_3dlut_wr_en_mask,
-				s.rmcm_regs.rmcm_3dlut_ram_sel, s.rmcm_regs.rmcm_3dlut_out_norm_factor, s.rmcm_regs.rmcm_3dlut_fl_sel,
-				s.rmcm_regs.rmcm_3dlut_out_offset_r, s.rmcm_regs.rmcm_3dlut_out_scale_r, s.rmcm_regs.rmcm_3dlut_fl_done,
-				s.rmcm_regs.rmcm_3dlut_fl_soft_underflow, s.rmcm_regs.rmcm_3dlut_fl_hard_underflow, s.rmcm_regs.rmcm_3dlut_mem_pwr_state,
-				s.rmcm_regs.rmcm_3dlut_mem_pwr_force, s.rmcm_regs.rmcm_3dlut_mem_pwr_dis, s.rmcm_regs.rmcm_3dlut_mem_pwr_mode);
+		if (s.opp_id != 0xf) {
+			uint32_t values[] = {
+				i,
+				s.rmcm_regs.rmcm_3dlut_size,
+				s.rmcm_regs.rmcm_3dlut_mode,
+				s.rmcm_regs.rmcm_3dlut_mode_cur,
+				s.rmcm_regs.rmcm_3dlut_read_sel,
+				s.rmcm_regs.rmcm_3dlut_30bit_en,
+				s.rmcm_regs.rmcm_3dlut_wr_en_mask,
+				s.rmcm_regs.rmcm_3dlut_ram_sel,
+				s.rmcm_regs.rmcm_3dlut_out_norm_factor,
+				s.rmcm_regs.rmcm_3dlut_fl_sel,
+				s.rmcm_regs.rmcm_3dlut_out_offset_r,
+				s.rmcm_regs.rmcm_3dlut_out_scale_r,
+				s.rmcm_regs.rmcm_3dlut_fl_done,
+				s.rmcm_regs.rmcm_3dlut_fl_soft_underflow,
+				s.rmcm_regs.rmcm_3dlut_fl_hard_underflow,
+				s.rmcm_regs.rmcm_3dlut_mem_pwr_state,
+				s.rmcm_regs.rmcm_3dlut_mem_pwr_force,
+				s.rmcm_regs.rmcm_3dlut_mem_pwr_dis,
+				s.rmcm_regs.rmcm_3dlut_mem_pwr_mode};
+
+			int num_elements = 19;
+
+			for (int j = 0; j < num_elements; j++)
+				DTN_INFO("%s \t %8xh\n", pLabels[j], values[j]);
+		}
 	}
 	DTN_INFO("\n");
 	DTN_INFO("===== MPC RMCM Shaper =====\n");
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
index cee29e89ec5c..198a28bd8e28 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
@@ -89,7 +89,7 @@ enum hubp_3dlut_fl_addressing_mode {
 enum hubp_3dlut_fl_width {
 	hubp_3dlut_fl_width_17 = 17,
 	hubp_3dlut_fl_width_33 = 33,
-	hubp_3dlut_fl_width_transformed = 4916
+	hubp_3dlut_fl_width_transformed    = 4916, //mpc default
 };
 
 enum hubp_3dlut_fl_crossbar_bit_slice {
@@ -99,6 +99,22 @@ enum hubp_3dlut_fl_crossbar_bit_slice {
 	hubp_3dlut_fl_crossbar_bit_slice_48_63 = 3
 };
 
+struct hubp_fl_3dlut_config {
+	bool enabled;
+	enum hubp_3dlut_fl_width width;
+	enum hubp_3dlut_fl_mode mode;
+	enum hubp_3dlut_fl_format format;
+	uint16_t bias;
+	uint16_t scale;
+	struct dc_plane_address address;
+	enum hubp_3dlut_fl_addressing_mode addr_mode;
+	enum dc_cm2_gpu_mem_layout layout;
+	uint8_t protection_bits;
+	enum hubp_3dlut_fl_crossbar_bit_slice crossbar_bit_slice_y_g;
+	enum hubp_3dlut_fl_crossbar_bit_slice crossbar_bit_slice_cb_b;
+	enum hubp_3dlut_fl_crossbar_bit_slice crossbar_bit_slice_cr_r;
+};
+
 struct hubp {
 	const struct hubp_funcs *funcs;
 	struct dc_context *ctx;
@@ -288,6 +304,7 @@ struct hubp_funcs {
 			enum hubp_3dlut_fl_crossbar_bit_slice bit_slice_cb_b,
 			enum hubp_3dlut_fl_crossbar_bit_slice bit_slice_cr_r);
 	int (*hubp_get_3dlut_fl_done)(struct hubp *hubp);
+	void (*hubp_program_3dlut_fl_config)(struct hubp *hubp, struct hubp_fl_3dlut_config *cfg);
 	void (*hubp_clear_tiling)(struct hubp *hubp);
 };
 
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
index 7641439f6ca0..14f0304e3eb9 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
@@ -115,6 +115,16 @@ enum MCM_LUT_ID {
 	MCM_LUT_SHAPER
 };
 
+struct mpc_fl_3dlut_config {
+	bool enabled;
+	uint16_t width;
+	bool select_lut_bank_a;
+	uint16_t bit_depth;
+	int hubp_index;
+	uint16_t bias;
+	uint16_t scale;
+};
+
 union mcm_lut_params {
 	const struct pwl_params *pwl;
 	const struct tetrahedral_params *lut3d;
@@ -1098,6 +1108,7 @@ struct mpc_funcs {
 	 * MPC RMCM new HW sequential programming functions
 	 */
 	struct {
+		void (*fl_3dlut_configure)(struct mpc *mpc, struct mpc_fl_3dlut_config *cfg, int mpcc_id);
 		void (*enable_3dlut_fl)(struct mpc *mpc, bool enable, int mpcc_id);
 		void (*update_3dlut_fast_load_select)(struct mpc *mpc, int mpcc_id, int hubp_idx);
 		void (*program_lut_read_write_control)(struct mpc *mpc, const enum MCM_LUT_ID id,
-- 
2.34.1



* [PATCH 10/12] drm/amd/display: Toggle for Disable Force Pstate Allow on Disable
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (8 preceding siblings ...)
  2025-07-30 18:59 ` [PATCH 09/12] drm/amd/display: Fixing hubp programming of 3dlut fast load Roman.Li
@ 2025-07-30 18:59 ` Roman.Li
  2025-07-30 18:59 ` [PATCH 11/12] drm/amd/display: Adding interface to log hw state when underflow happens Roman.Li
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:59 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Ryan Seto, Dillon Varone

From: Ryan Seto <ryanseto@amd.com>

[Why & How]
In theory, the driver should be able to support disabling the force
of p-state allow after hardware release; however, this behavior has
not been tested yet. Introduce a new toggle to disable the force on
the fly.
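
Condensed behavior of the two paths (sketch of the diff below):

    if (!dc->debug.disable_force_pstate_allow_on_hw_release) {
            /* default: force p-state allow through hubbub and
             * report p-state change as supported before release */
    } else {
            /* toggle set: skip the force; clear
             * p_state_change_support and let update_clocks() act */
    }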

Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Ryan Seto <ryanseto@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h           |  1 +
 .../amd/display/dc/hwss/dcn401/dcn401_hwseq.c | 36 +++++++++++--------
 .../dc/resource/dcn401/dcn401_resource.c      |  1 +
 3 files changed, 24 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index c013b28207a1..a83227f6bb14 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -1073,6 +1073,7 @@ struct dc_debug_options {
 	unsigned int force_mall_ss_num_ways;
 	bool alloc_extra_way_for_cursor;
 	uint32_t subvp_extra_lines;
+	bool disable_force_pstate_allow_on_hw_release;
 	bool force_usr_allow;
 	/* uses value at boot and disables switch */
 	bool disable_dtb_ref_clk_switch;
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
index fb949aeb1244..d5b5e2ce6ff6 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
@@ -1621,20 +1621,28 @@ void dcn401_unblank_stream(struct pipe_ctx *pipe_ctx,
 
 void dcn401_hardware_release(struct dc *dc)
 {
-	dc_dmub_srv_fams2_update_config(dc, dc->current_state, false);
-
-	/* If pstate unsupported, or still supported
-	 * by firmware, force it supported by dcn
-	 */
-	if (dc->current_state) {
-		if ((!dc->clk_mgr->clks.p_state_change_support ||
-				dc->current_state->bw_ctx.bw.dcn.fams2_global_config.features.bits.enable) &&
-				dc->res_pool->hubbub->funcs->force_pstate_change_control)
-			dc->res_pool->hubbub->funcs->force_pstate_change_control(
-					dc->res_pool->hubbub, true, true);
-
-		dc->current_state->bw_ctx.bw.dcn.clk.p_state_change_support = true;
-		dc->clk_mgr->funcs->update_clocks(dc->clk_mgr, dc->current_state, true);
+	if (!dc->debug.disable_force_pstate_allow_on_hw_release) {
+		dc_dmub_srv_fams2_update_config(dc, dc->current_state, false);
+
+		/* If pstate unsupported, or still supported
+		* by firmware, force it supported by dcn
+		*/
+		if (dc->current_state) {
+			if ((!dc->clk_mgr->clks.p_state_change_support ||
+					dc->current_state->bw_ctx.bw.dcn.fams2_global_config.features.bits.enable) &&
+					dc->res_pool->hubbub->funcs->force_pstate_change_control)
+				dc->res_pool->hubbub->funcs->force_pstate_change_control(
+						dc->res_pool->hubbub, true, true);
+
+			dc->current_state->bw_ctx.bw.dcn.clk.p_state_change_support = true;
+			dc->clk_mgr->funcs->update_clocks(dc->clk_mgr, dc->current_state, true);
+		}
+	} else {
+		if (dc->current_state) {
+			dc->clk_mgr->clks.p_state_change_support = false;
+			dc->clk_mgr->funcs->update_clocks(dc->clk_mgr, dc->current_state, true);
+		}
+		dc_dmub_srv_fams2_update_config(dc, dc->current_state, false);
 	}
 }
 
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
index b3988e38d0a6..068c123ea8a8 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
@@ -708,6 +708,7 @@ static const struct dc_debug_options debug_defaults_drv = {
 	},
 	.use_max_lb = true,
 	.force_disable_subvp = false,
+	.disable_force_pstate_allow_on_hw_release = false,
 	.exit_idle_opt_for_cursor_updates = true,
 	.using_dml2 = true,
 	.using_dml21 = true,
-- 
2.34.1



* [PATCH 11/12] drm/amd/display: Adding interface to log hw state when underflow happens
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (9 preceding siblings ...)
  2025-07-30 18:59 ` [PATCH 10/12] drm/amd/display: Toggle for Disable Force Pstate Allow on Disable Roman.Li
@ 2025-07-30 18:59 ` Roman.Li
  2025-07-30 18:59 ` [PATCH 12/12] drm/amd/display: Promote DC to 3.2.344 Roman.Li
  2025-08-05 13:04 ` [PATCH 00/12] DC Patches July 30, 2025 Wheeler, Daniel
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:59 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Muhammad Ahmed, Nicholas Kazlauskas

From: Muhammad Ahmed <Muhammad.Ahmed@amd.com>

[why]
Capturing HW state at the moment of underflow will help us better
debug underflow issues.
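
A hypothetical use from DM underflow handling (only
dc_get_underflow_debug_data_for_otg() below is part of this patch):

    struct dc_underflow_debug_data data = {0};

    dc_get_underflow_debug_data_for_otg(dc, otg_inst, &data);
    DRM_ERROR("underflow on OTG%u: frame=%u pos=(%u,%u)\n",
              data.otg_inst, data.otg_frame_count,
              data.h_position, data.v_position);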

Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Signed-off-by: Muhammad Ahmed <Muhammad.Ahmed@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/core/dc.c      | 18 +++++++
 drivers/gpu/drm/amd/display/dc/dc.h           | 30 ++++++++++++
 .../display/dc/hubbub/dcn30/dcn30_hubbub.c    | 31 ++++++++++++
 .../display/dc/hubbub/dcn30/dcn30_hubbub.h    |  6 +++
 .../display/dc/hubbub/dcn31/dcn31_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn32/dcn32_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn35/dcn35_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn401/dcn401_hubbub.c  |  2 +
 .../amd/display/dc/hubp/dcn10/dcn10_hubp.h    |  8 +++-
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.c    | 26 ++++++++++
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.h    |  8 +++-
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.c    | 15 ++++++
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.h    |  6 ++-
 .../amd/display/dc/hubp/dcn32/dcn32_hubp.c    |  3 ++
 .../amd/display/dc/hubp/dcn35/dcn35_hubp.c    |  3 ++
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.c  |  3 ++
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.h  |  4 +-
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.c   | 48 +++++++++++++++++++
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.h   |  5 ++
 .../amd/display/dc/hwss/dcn30/dcn30_init.c    |  1 +
 .../amd/display/dc/hwss/dcn31/dcn31_init.c    |  1 +
 .../amd/display/dc/hwss/dcn314/dcn314_init.c  |  1 +
 .../amd/display/dc/hwss/dcn32/dcn32_init.c    |  1 +
 .../amd/display/dc/hwss/dcn35/dcn35_init.c    |  1 +
 .../amd/display/dc/hwss/dcn351/dcn351_init.c  |  1 +
 .../amd/display/dc/hwss/dcn401/dcn401_init.c  |  1 +
 .../drm/amd/display/dc/hwss/hw_sequencer.h    |  4 ++
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |  2 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  |  2 +
 .../dc/resource/dcn32/dcn32_resource.h        |  3 +-
 .../dc/resource/dcn401/dcn401_resource.h      |  3 +-
 31 files changed, 236 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index e42cc37eae11..242bcb30dd34 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -6339,3 +6339,21 @@ bool dc_can_clear_cursor_limit(struct dc *dc)
 
 	return false;
 }
+
+void dc_get_underflow_debug_data_for_otg(struct dc *dc, int primary_otg_inst,
+				struct dc_underflow_debug_data *out_data)
+{
+	struct timing_generator *tg = NULL;
+
+	for (int i = 0; i < MAX_PIPES; i++) {
+		if (dc->res_pool->timing_generators[i] &&
+			dc->res_pool->timing_generators[i]->inst == primary_otg_inst) {
+				tg = dc->res_pool->timing_generators[i];
+				break;
+		}
+	}
+
+	dc_exit_ips_for_hw_access(dc);
+	if (dc->hwss.get_underflow_debug_data)
+		dc->hwss.get_underflow_debug_data(dc, tg, out_data);
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index a83227f6bb14..b68fbdd73dca 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -1819,6 +1819,23 @@ struct dc_surface_update {
 	struct dc_bias_and_scale bias_and_scale;
 };
 
+struct dc_underflow_debug_data {
+	uint32_t otg_inst;
+	uint32_t otg_underflow;
+	uint32_t h_position;
+	uint32_t v_position;
+	uint32_t otg_frame_count;
+	struct dc_underflow_per_hubp_debug_data {
+		uint32_t hubp_underflow;
+		uint32_t hubp_in_blank;
+		uint32_t hubp_readline;
+		uint32_t det_config_error;
+	} hubps[MAX_PIPES];
+	uint32_t curr_det_sizes[MAX_PIPES];
+	uint32_t target_det_sizes[MAX_PIPES];
+	uint32_t compbuf_config_error;
+};
+
 /*
  * Create a new surface with default parameters;
  */
@@ -2713,4 +2730,17 @@ bool dc_is_timing_changed(struct dc_stream_state *cur_stream,
 bool dc_is_cursor_limit_pending(struct dc *dc);
 bool dc_can_clear_cursor_limit(struct dc *dc);
 
+/**
+ * dc_get_underflow_debug_data_for_otg() - Retrieve underflow debug data.
+ *
+ * @dc: Pointer to the display core context.
+ * @primary_otg_inst: Instance index of the primary OTG that underflowed.
+ * @out_data: Pointer to a dc_underflow_debug_data struct to be filled with debug information.
+ *
+ * This function collects underflow-related HW state after an underflow has occurred:
+ * OTG underflow status, current scanout position, frame count, and per-HUBP debug data.
+ * The results are stored in the provided out_data structure for further analysis or logging.
+ */
+void dc_get_underflow_debug_data_for_otg(struct dc *dc, int primary_otg_inst, struct dc_underflow_debug_data *out_data);
+
 #endif /* DC_INTERFACE_H_ */
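
To illustrate how this interface is meant to be consumed, here is a minimal
sketch of a caller (the wrapper name dump_underflow_state() and its log lines
are illustrative only, not part of this patch):

static void dump_underflow_state(struct dc *dc, int otg_inst)
{
        struct dc_underflow_debug_data data = {0};
        int i;

        /* Fill 'data' with OTG and per-HUBP underflow state. */
        dc_get_underflow_debug_data_for_otg(dc, otg_inst, &data);

        DC_LOG_DEBUG("OTG%u: underflow=%u frame=%u pos=%u,%u\n",
                     data.otg_inst, data.otg_underflow,
                     data.otg_frame_count, data.h_position, data.v_position);

        for (i = 0; i < MAX_PIPES; i++)
                DC_LOG_DEBUG("HUBP%d: underflow=%u in_blank=%u readline=%u det_err=%u det=%u/%u\n",
                             i, data.hubps[i].hubp_underflow,
                             data.hubps[i].hubp_in_blank,
                             data.hubps[i].hubp_readline,
                             data.hubps[i].det_config_error,
                             data.curr_det_sizes[i], data.target_det_sizes[i]);

        DC_LOG_DEBUG("compbuf_config_error=%u\n", data.compbuf_config_error);
}
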
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
index d347bb06577a..e7e5f6d4778e 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
@@ -440,6 +440,35 @@ void hubbub3_init_watermarks(struct hubbub *hubbub)
 	REG_WRITE(DCHUBBUB_ARB_ALLOW_DRAM_CLK_CHANGE_WATERMARK_D, reg);
 }
 
+void hubbub3_get_det_sizes(struct hubbub *hubbub, uint32_t *curr_det_sizes, uint32_t *target_det_sizes)
+{
+	struct dcn20_hubbub *hubbub1 = TO_DCN20_HUBBUB(hubbub);
+
+	REG_GET_2(DCHUBBUB_DET0_CTRL, DET0_SIZE_CURRENT, &curr_det_sizes[0],
+		DET0_SIZE, &target_det_sizes[0]);
+
+	REG_GET_2(DCHUBBUB_DET1_CTRL, DET1_SIZE_CURRENT, &curr_det_sizes[1],
+		DET1_SIZE, &target_det_sizes[1]);
+
+	REG_GET_2(DCHUBBUB_DET2_CTRL, DET2_SIZE_CURRENT, &curr_det_sizes[2],
+		DET2_SIZE, &target_det_sizes[2]);
+
+	REG_GET_2(DCHUBBUB_DET3_CTRL, DET3_SIZE_CURRENT, &curr_det_sizes[3],
+		DET3_SIZE, &target_det_sizes[3]);
+
+}
+
+uint32_t hubbub3_compbuf_config_error(struct hubbub *hubbub)
+{
+	struct dcn20_hubbub *hubbub1 = TO_DCN20_HUBBUB(hubbub);
+	uint32_t compbuf_config_error = 0;
+
+	REG_GET(DCHUBBUB_COMPBUF_CTRL, CONFIG_ERROR,
+		&compbuf_config_error);
+
+	return compbuf_config_error;
+}
+
 static const struct hubbub_funcs hubbub30_funcs = {
 	.update_dchub = hubbub2_update_dchub,
 	.init_dchub_sys_ctx = hubbub3_init_dchub_sys_ctx,
@@ -457,6 +486,8 @@ static const struct hubbub_funcs hubbub30_funcs = {
 	.force_pstate_change_control = hubbub3_force_pstate_change_control,
 	.init_watermarks = hubbub3_init_watermarks,
 	.hubbub_read_state = hubbub2_read_state,
+	.get_det_sizes = hubbub3_get_det_sizes,
+	.compbuf_config_error = hubbub3_compbuf_config_error,
 };
 
 void hubbub3_construct(struct dcn20_hubbub *hubbub3,
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.h b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.h
index ca6233e8f1f4..49a469969d36 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.h
@@ -133,4 +133,10 @@ void hubbub3_force_pstate_change_control(struct hubbub *hubbub,
 
 void hubbub3_init_watermarks(struct hubbub *hubbub);
 
+void hubbub3_get_det_sizes(struct hubbub *hubbub,
+	uint32_t *curr_det_sizes,
+	uint32_t *target_det_sizes);
+
+uint32_t hubbub3_compbuf_config_error(struct hubbub *hubbub);
+
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
index b98505b240a7..cdb20251a154 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
@@ -1071,6 +1071,8 @@ static const struct hubbub_funcs hubbub31_funcs = {
 	.program_compbuf_size = dcn31_program_compbuf_size,
 	.init_crb = dcn31_init_crb,
 	.hubbub_read_state = hubbub2_read_state,
+	.get_det_sizes = hubbub3_get_det_sizes,
+	.compbuf_config_error = hubbub3_compbuf_config_error,
 };
 
 void hubbub31_construct(struct dcn20_hubbub *hubbub31,
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
index 32a6be543105..92957398ac0a 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
@@ -1009,6 +1009,8 @@ static const struct hubbub_funcs hubbub32_funcs = {
 	.force_usr_retraining_allow = hubbub32_force_usr_retraining_allow,
 	.set_request_limit = hubbub32_set_request_limit,
 	.get_mall_en = hubbub32_get_mall_en,
+	.get_det_sizes = hubbub3_get_det_sizes,
+	.compbuf_config_error = hubbub3_compbuf_config_error,
 };
 
 void hubbub32_construct(struct dcn20_hubbub *hubbub2,
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
index 6d41953011f5..a443722a8632 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
@@ -589,6 +589,8 @@ static const struct hubbub_funcs hubbub35_funcs = {
 	.hubbub_read_state = hubbub2_read_state,
 	.force_usr_retraining_allow = hubbub32_force_usr_retraining_allow,
 	.dchubbub_init = hubbub35_init,
+	.get_det_sizes = hubbub3_get_det_sizes,
+	.compbuf_config_error = hubbub3_compbuf_config_error,
 };
 
 void hubbub35_construct(struct dcn20_hubbub *hubbub2,
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
index 92fab471b183..a36273a52880 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn401/dcn401_hubbub.c
@@ -1247,6 +1247,8 @@ static const struct hubbub_funcs hubbub4_01_funcs = {
 	.program_compbuf_segments = dcn401_program_compbuf_segments,
 	.wait_for_det_update = dcn401_wait_for_det_update,
 	.program_arbiter = dcn401_program_arbiter,
+	.get_det_sizes = hubbub3_get_det_sizes,
+	.compbuf_config_error = hubbub3_compbuf_config_error,
 };
 
 void hubbub401_construct(struct dcn20_hubbub *hubbub2,
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
index 0b7547d5b488..cf2eb9793008 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
@@ -104,7 +104,8 @@
 	SRI(DCN_SURF1_TTU_CNTL1, HUBPREQ, id),\
 	SRI(DCN_CUR0_TTU_CNTL0, HUBPREQ, id),\
 	SRI(DCN_CUR0_TTU_CNTL1, HUBPREQ, id),\
-	SRI(HUBP_CLK_CNTL, HUBP, id)
+	SRI(HUBP_CLK_CNTL, HUBP, id),\
+	SRI(HUBPRET_READ_LINE_VALUE, HUBPRET, id)
 
 /* Register address initialization macro for ASICs with VM */
 #define HUBP_REG_LIST_DCN_VM(id)\
@@ -249,7 +250,8 @@
 	uint32_t CURSOR_POSITION; \
 	uint32_t CURSOR_HOT_SPOT; \
 	uint32_t CURSOR_DST_OFFSET; \
-	uint32_t HUBP_CLK_CNTL
+	uint32_t HUBP_CLK_CNTL; \
+	uint32_t HUBPRET_READ_LINE_VALUE
 
 #define HUBP_SF(reg_name, field_name, post_fix)\
 	.field_name = reg_name ## __ ## field_name ## post_fix
@@ -622,6 +624,8 @@
 	type DCN_VM_SYSTEM_APERTURE_DEFAULT_SYSTEM;\
 	type DCN_VM_SYSTEM_APERTURE_DEFAULT_ADDR_MSB;\
 	type DCN_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB;\
+	type PIPE_READ_LINE;\
+	type HUBP_SEG_ALLOC_ERR_STATUS;\
 	/* todo:  get these from GVM instead of reading registers ourselves */\
 	type PAGE_DIRECTORY_ENTRY_HI32;\
 	type PAGE_DIRECTORY_ENTRY_LO32;\
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
index 0da70b50e86d..556214b2227d 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
@@ -505,6 +505,30 @@ void hubp3_init(struct hubp *hubp)
 	hubp_reset(hubp);
 }
 
+uint32_t hubp3_get_current_read_line(struct hubp *hubp)
+{
+	uint32_t read_line = 0;
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	REG_GET(HUBPRET_READ_LINE_VALUE,
+		PIPE_READ_LINE,
+		&read_line);
+
+	return read_line;
+}
+
+uint32_t hubp3_get_underflow_status(struct hubp *hubp)
+{
+	uint32_t hubp_underflow = 0;
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	REG_GET(DCHUBP_CNTL,
+		HUBP_UNDERFLOW_STATUS,
+		&hubp_underflow);
+
+	return hubp_underflow;
+}
+
 static struct hubp_funcs dcn30_hubp_funcs = {
 	.hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
 	.hubp_is_triplebuffer_enabled = hubp2_is_triplebuffer_enabled,
@@ -534,6 +558,8 @@ static struct hubp_funcs dcn30_hubp_funcs = {
 	.hubp_soft_reset = hubp1_soft_reset,
 	.hubp_set_flip_int = hubp1_set_flip_int,
 	.hubp_clear_tiling = hubp3_clear_tiling,
+	.hubp_get_underflow_status = hubp3_get_underflow_status,
+	.hubp_get_current_read_line = hubp3_get_current_read_line,
 };
 
 bool hubp3_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.h
index b7d7adf0b58c..842f4eb72cc8 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.h
@@ -243,7 +243,8 @@
 	HUBP_SF(HUBPREQ0_FLIP_PARAMETERS_6, REFCYC_PER_META_CHUNK_FLIP_C, mask_sh),\
 	HUBP_SF(HUBPREQ0_VBLANK_PARAMETERS_5, REFCYC_PER_VM_GROUP_VBLANK, mask_sh),\
 	HUBP_SF(HUBPREQ0_VBLANK_PARAMETERS_6, REFCYC_PER_VM_REQ_VBLANK, mask_sh),\
-	HUBP_SF(HUBP0_DCHUBP_REQ_SIZE_CONFIG, VM_GROUP_SIZE, mask_sh)
+	HUBP_SF(HUBP0_DCHUBP_REQ_SIZE_CONFIG, VM_GROUP_SIZE, mask_sh),\
+	HUBP_SF(HUBPRET0_HUBPRET_READ_LINE_VALUE, PIPE_READ_LINE, mask_sh)
 
 bool hubp3_construct(
 		struct dcn20_hubp *hubp2,
@@ -299,6 +300,11 @@ void hubp3_init(struct hubp *hubp);
 
 void hubp3_clear_tiling(struct hubp *hubp);
 
+uint32_t hubp3_get_current_read_line(struct hubp *hubp);
+
+uint32_t hubp3_get_underflow_status(struct hubp *hubp);
+
+
 #endif /* __DC_HUBP_DCN30_H__ */
 
 
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
index 7fd582a8a4ba..47101847c2b7 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
@@ -68,6 +68,18 @@ void hubp31_program_extended_blank_value(
 	hubp31_program_extended_blank(hubp, min_dst_y_next_start_optimized);
 }
 
+uint32_t hubp31_get_det_config_error(struct hubp *hubp)
+{
+	uint32_t config_error = 0;
+	struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+	REG_GET(DCHUBP_CNTL,
+		HUBP_SEG_ALLOC_ERR_STATUS,
+		&config_error);
+
+	return config_error;
+}
+
 static struct hubp_funcs dcn31_hubp_funcs = {
 	.hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
 	.hubp_is_triplebuffer_enabled = hubp2_is_triplebuffer_enabled,
@@ -98,6 +110,9 @@ static struct hubp_funcs dcn31_hubp_funcs = {
 	.hubp_in_blank = hubp1_in_blank,
 	.program_extended_blank = hubp31_program_extended_blank,
 	.hubp_clear_tiling = hubp3_clear_tiling,
+	.hubp_get_underflow_status = hubp3_get_underflow_status,
+	.hubp_get_current_read_line = hubp3_get_current_read_line,
+	.hubp_get_det_config_error = hubp31_get_det_config_error,
 };
 
 bool hubp31_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.h
index d688db79b750..5952c4671507 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.h
@@ -228,7 +228,9 @@
 	HUBP_SF(HUBPREQ0_FLIP_PARAMETERS_6, REFCYC_PER_META_CHUNK_FLIP_C, mask_sh),\
 	HUBP_SF(HUBPREQ0_VBLANK_PARAMETERS_5, REFCYC_PER_VM_GROUP_VBLANK, mask_sh),\
 	HUBP_SF(HUBPREQ0_VBLANK_PARAMETERS_6, REFCYC_PER_VM_REQ_VBLANK, mask_sh),\
-	HUBP_SF(HUBP0_DCHUBP_REQ_SIZE_CONFIG, VM_GROUP_SIZE, mask_sh)
+	HUBP_SF(HUBP0_DCHUBP_REQ_SIZE_CONFIG, VM_GROUP_SIZE, mask_sh),\
+	HUBP_SF(HUBPRET0_HUBPRET_READ_LINE_VALUE, PIPE_READ_LINE, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_CNTL, HUBP_SEG_ALLOC_ERR_STATUS, mask_sh)
 
 
 bool hubp31_construct(
@@ -246,4 +248,6 @@ void hubp31_set_unbounded_requesting(struct hubp *hubp, bool enable);
 void hubp31_program_extended_blank_value(
 	struct hubp *hubp, unsigned int min_dst_y_next_start_optimized);
 
+uint32_t hubp31_get_det_config_error(struct hubp *hubp);
+
 #endif /* __DC_HUBP_DCN31_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
index f3a21c623f44..a5f23bb2a76a 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
@@ -206,6 +206,9 @@ static struct hubp_funcs dcn32_hubp_funcs = {
 	.hubp_update_mall_sel = hubp32_update_mall_sel,
 	.hubp_prepare_subvp_buffering = hubp32_prepare_subvp_buffering,
 	.hubp_clear_tiling = hubp3_clear_tiling,
+	.hubp_get_underflow_status = hubp3_get_underflow_status,
+	.hubp_get_current_read_line = hubp3_get_current_read_line,
+	.hubp_get_det_config_error = hubp31_get_det_config_error,
 };
 
 bool hubp32_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
index 6d060ba12da8..b140808f21af 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
@@ -218,6 +218,9 @@ static struct hubp_funcs dcn35_hubp_funcs = {
 	.hubp_in_blank = hubp1_in_blank,
 	.program_extended_blank = hubp31_program_extended_blank_value,
 	.hubp_clear_tiling = hubp3_clear_tiling,
+	.hubp_get_underflow_status = hubp3_get_underflow_status,
+	.hubp_get_current_read_line = hubp3_get_current_read_line,
+	.hubp_get_det_config_error = hubp31_get_det_config_error,
 };
 
 bool hubp35_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
index 5028180ad80a..0fcbc6a35be6 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
@@ -1071,6 +1071,9 @@ static struct hubp_funcs dcn401_hubp_funcs = {
 	.hubp_get_3dlut_fl_done = hubp401_get_3dlut_fl_done,
 	.hubp_clear_tiling = hubp401_clear_tiling,
 	.hubp_program_3dlut_fl_config = hubp401_program_3dlut_fl_config,
+	.hubp_get_underflow_status = hubp3_get_underflow_status,
+	.hubp_get_current_read_line = hubp3_get_current_read_line,
+	.hubp_get_det_config_error = hubp31_get_det_config_error,
 };
 
 bool hubp401_construct(
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
index 887b479ed1d7..fdabbeec8ffa 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.h
@@ -252,7 +252,9 @@
 	HUBP_SF(HUBP0_DCHUBP_MCACHEID_CONFIG, MCACHEID_MALL_PREF_1H_P0, mask_sh),\
 	HUBP_SF(HUBP0_DCHUBP_MCACHEID_CONFIG, MCACHEID_MALL_PREF_2H_P0, mask_sh),\
 	HUBP_SF(HUBP0_DCHUBP_MCACHEID_CONFIG, MCACHEID_MALL_PREF_1H_P1, mask_sh),\
-	HUBP_SF(HUBP0_DCHUBP_MCACHEID_CONFIG, MCACHEID_MALL_PREF_2H_P1, mask_sh)
+	HUBP_SF(HUBP0_DCHUBP_MCACHEID_CONFIG, MCACHEID_MALL_PREF_2H_P1, mask_sh),\
+	HUBP_SF(HUBPRET0_HUBPRET_READ_LINE_VALUE, PIPE_READ_LINE, mask_sh),\
+	HUBP_SF(HUBP0_DCHUBP_CNTL, HUBP_SEG_ALLOC_ERR_STATUS, mask_sh)
 
 void hubp401_update_mall_sel(struct hubp *hubp, uint32_t mall_sel, bool c_cursor);
 
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
index 37a239219dfe..139a63101488 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
@@ -1228,3 +1228,51 @@ void dcn30_wait_for_all_pending_updates(const struct pipe_ctx *pipe_ctx)
 		}
 	}
 }
+
+void dcn30_get_underflow_debug_data(const struct dc *dc,
+	struct timing_generator *tg,
+	struct dc_underflow_debug_data *out_data)
+{
+	struct hubbub *hubbub = dc->res_pool->hubbub;
+
+	if (tg) {
+		uint32_t v_blank_start = 0, v_blank_end = 0;
+
+		out_data->otg_inst = tg->inst;
+
+		tg->funcs->get_scanoutpos(tg,
+					  &v_blank_start,
+					  &v_blank_end,
+					  &out_data->h_position,
+					  &out_data->v_position);
+
+		out_data->otg_frame_count = tg->funcs->get_frame_count(tg);
+
+		out_data->otg_underflow = tg->funcs->is_optc_underflow_occurred(tg);
+	}
+
+	for (int i = 0; i < MAX_PIPES; i++) {
+		struct hubp *hubp = dc->res_pool->hubps[i];
+
+		if (hubp) {
+			if (hubp->funcs->hubp_get_underflow_status)
+				out_data->hubps[i].hubp_underflow = hubp->funcs->hubp_get_underflow_status(hubp);
+
+			if (hubp->funcs->hubp_in_blank)
+				out_data->hubps[i].hubp_in_blank = hubp->funcs->hubp_in_blank(hubp);
+
+			if (hubp->funcs->hubp_get_current_read_line)
+				out_data->hubps[i].hubp_readline = hubp->funcs->hubp_get_current_read_line(hubp);
+
+			if (hubp->funcs->hubp_get_det_config_error)
+				out_data->hubps[i].det_config_error = hubp->funcs->hubp_get_det_config_error(hubp);
+		}
+	}
+
+	if (hubbub->funcs->get_det_sizes)
+		hubbub->funcs->get_det_sizes(hubbub, out_data->curr_det_sizes, out_data->target_det_sizes);
+
+	if (hubbub->funcs->compbuf_config_error)
+		out_data->compbuf_config_error = hubbub->funcs->compbuf_config_error(hubbub);
+
+}
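
Taken together with the dc.c change above, the call chain is:
dc_get_underflow_debug_data_for_otg() resolves the primary OTG instance to a
timing generator, dispatches through the new dc->hwss.get_underflow_debug_data
hook (wired to dcn30_get_underflow_debug_data() for the DCN3x/DCN4x families
in the init tables below), and that in turn reads state via the optional
hubp/hubbub hooks; every access is NULL-checked, so ASICs that do not
implement a given hook are simply skipped.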
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h
index 4b90b781c4f2..40afbbfb5b9c 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.h
@@ -29,6 +29,7 @@
 #include "hw_sequencer_private.h"
 
 struct dc;
+struct dc_underflow_debug_data;
 
 void dcn30_init_hw(struct dc *dc);
 void dcn30_program_all_writeback_pipes_in_tree(
@@ -98,4 +99,8 @@ void dcn30_prepare_bandwidth(struct dc *dc,
 
 void dcn30_wait_for_all_pending_updates(const struct pipe_ctx *pipe_ctx);
 
+void dcn30_get_underflow_debug_data(const struct dc *dc,
+	struct timing_generator *tg,
+	struct dc_underflow_debug_data *out_data);
+
 #endif /* __DC_HWSS_DCN30_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c
index 2ac5d54d1626..d7ff55669bac 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_init.c
@@ -110,6 +110,7 @@ static const struct hw_sequencer_funcs dcn30_funcs = {
 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
 	.is_abm_supported = dcn21_is_abm_supported,
 	.wait_for_all_pending_updates = dcn30_wait_for_all_pending_updates,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn30_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c
index 556f4fe57eda..5a6a459da224 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn31/dcn31_init.c
@@ -112,6 +112,7 @@ static const struct hw_sequencer_funcs dcn31_funcs = {
 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
 	.setup_hpo_hw_control = dcn31_setup_hpo_hw_control,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn31_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
index f5112742edf9..a99145a30230 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_init.c
@@ -115,6 +115,7 @@ static const struct hw_sequencer_funcs dcn314_funcs = {
 	.update_visual_confirm_color = dcn10_update_visual_confirm_color,
 	.calculate_pix_rate_divider = dcn314_calculate_pix_rate_divider,
 	.setup_hpo_hw_control = dcn31_setup_hpo_hw_control,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn314_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c
index b971356d30b1..c19ef075c882 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_init.c
@@ -121,6 +121,7 @@ static const struct hw_sequencer_funcs dcn32_funcs = {
 	.calculate_pix_rate_divider = dcn32_calculate_pix_rate_divider,
 	.program_outstanding_updates = dcn32_program_outstanding_updates,
 	.wait_for_all_pending_updates = dcn30_wait_for_all_pending_updates,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn32_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
index a3ccf805bd16..52cc488416ac 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_init.c
@@ -128,6 +128,7 @@ static const struct hw_sequencer_funcs dcn35_funcs = {
 	.enable_plane = dcn20_enable_plane,
 	.update_dchubp_dpp = dcn20_update_dchubp_dpp,
 	.post_unlock_reset_opp = dcn20_post_unlock_reset_opp,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn35_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
index 58f2be2a326b..e34efcb7bde5 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn351/dcn351_init.c
@@ -123,6 +123,7 @@ static const struct hw_sequencer_funcs dcn351_funcs = {
 	.set_long_vtotal = dcn35_set_long_vblank,
 	.calculate_pix_rate_divider = dcn32_calculate_pix_rate_divider,
 	.setup_hpo_hw_control = dcn35_setup_hpo_hw_control,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn351_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
index fe7aceb2f510..d6e11b7e4fce 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_init.c
@@ -104,6 +104,7 @@ static const struct hw_sequencer_funcs dcn401_funcs = {
 	.enable_plane = dcn20_enable_plane,
 	.update_dchubp_dpp = dcn20_update_dchubp_dpp,
 	.post_unlock_reset_opp = dcn20_post_unlock_reset_opp,
+	.get_underflow_debug_data = dcn30_get_underflow_debug_data,
 };
 
 static const struct hwseq_private_funcs dcn401_private_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
index 9df8030e37f7..1723bbcf2c46 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
@@ -47,6 +47,7 @@ struct link_resource;
 struct dc_dmub_cmd;
 struct pg_block_update;
 struct drr_params;
+struct dc_underflow_debug_data;
 
 struct subvp_pipe_control_lock_fast_params {
 	struct dc *dc;
@@ -475,6 +476,9 @@ struct hw_sequencer_funcs {
 			struct dc_state *context);
 	void (*post_unlock_reset_opp)(struct dc *dc,
 			struct pipe_ctx *opp_head);
+	void (*get_underflow_debug_data)(const struct dc *dc,
+			struct timing_generator *tg,
+			struct dc_underflow_debug_data *out_data);
 };
 
 void color_space_to_black_color(
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
index 52b745667ef7..9bee45b36629 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/dchubbub.h
@@ -229,6 +229,8 @@ struct hubbub_funcs {
 	void (*program_compbuf_segments)(struct hubbub *hubbub, unsigned compbuf_size_seg, bool safe_to_increase);
 	void (*wait_for_det_update)(struct hubbub *hubbub, int hubp_inst);
 	bool (*program_arbiter)(struct hubbub *hubbub, struct dml2_display_arb_regs *arb_regs, bool safe_to_lower);
+	void (*get_det_sizes)(struct hubbub *hubbub, uint32_t *curr_det_sizes, uint32_t *target_det_sizes);
+	uint32_t (*compbuf_config_error)(struct hubbub *hubbub);
 };
 
 struct hubbub {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
index 198a28bd8e28..2b874d2cc61c 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
@@ -306,6 +306,8 @@ struct hubp_funcs {
 	int (*hubp_get_3dlut_fl_done)(struct hubp *hubp);
 	void (*hubp_program_3dlut_fl_config)(struct hubp *hubp, struct hubp_fl_3dlut_config *cfg);
 	void (*hubp_clear_tiling)(struct hubp *hubp);
+	uint32_t (*hubp_get_current_read_line)(struct hubp *hubp);
+	uint32_t (*hubp_get_det_config_error)(struct hubp *hubp);
 };
 
 #endif
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.h b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.h
index 82f966cf4ed2..20d714596021 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.h
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.h
@@ -1141,7 +1141,8 @@ unsigned int dcn32_get_max_hw_cursor_size(const struct dc *dc,
       SRI_ARR(DCN_SURF1_TTU_CNTL1, HUBPREQ, id),                               \
       SRI_ARR(DCN_CUR0_TTU_CNTL0, HUBPREQ, id),                                \
       SRI_ARR(DCN_CUR0_TTU_CNTL1, HUBPREQ, id),                                \
-      SRI_ARR(HUBP_CLK_CNTL, HUBP, id)
+      SRI_ARR(HUBP_CLK_CNTL, HUBP, id),                                        \
+      SRI_ARR(HUBPRET_READ_LINE_VALUE, HUBPRET, id)
 #define HUBP_REG_LIST_DCN2_COMMON_RI(id)                                       \
   HUBP_REG_LIST_DCN_RI(id), HUBP_REG_LIST_DCN_VM_RI(id),                       \
       SRI_ARR(PREFETCH_SETTINGS, HUBPREQ, id),                                 \
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.h b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.h
index 2ae6831c31ef..0fc66487d800 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.h
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.h
@@ -140,7 +140,8 @@ void dcn401_prepare_mcache_programming(struct dc *dc, struct dc_state *context);
 	SRI_ARR(UCLK_PSTATE_FORCE, HUBPREQ, id),                                 \
 	HUBP_3DLUT_FL_REG_LIST_DCN401(id),                                       \
 	SRI_ARR(DCSURF_VIEWPORT_MCACHE_SPLIT_COORDINATE, HUBP, id),              \
-	SRI_ARR(DCHUBP_MCACHEID_CONFIG, HUBP, id)
+	SRI_ARR(DCHUBP_MCACHEID_CONFIG, HUBP, id),								 \
+	SRI_ARR(HUBPRET_READ_LINE_VALUE, HUBPRET, id)
 
 /* ABM */
 #define ABM_DCN401_REG_LIST_RI(id)                                            \
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* [PATCH 12/12] drm/amd/display: Promote DC to 3.2.344
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (10 preceding siblings ...)
  2025-07-30 18:59 ` [PATCH 11/12] drm/amd/display: Adding interface to log hw state when underflow happens Roman.Li
@ 2025-07-30 18:59 ` Roman.Li
  2025-08-05 13:04 ` [PATCH 00/12] DC Patches July 30, 2025 Wheeler, Daniel
  12 siblings, 0 replies; 16+ messages in thread
From: Roman.Li @ 2025-07-30 18:59 UTC (permalink / raw)
  To: amd-gfx
  Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Taimur Hassan

From: Taimur Hassan <Syed.Hassan@amd.com>

Summary:
* Add interface to log hw state when underflow happens
* Fix hubp programming of 3dlut fast load
* Avoid Read Remote DPCD Many Times
* More liberal vmin/vmax update for freesync
* Fix dmub access race condition

Acked-by: Sun peng (Leo) Li <sunpeng.li@amd.com>
Signed-off-by: Taimur Hassan <Syed.Hassan@amd.com>
Signed-off-by: Roman Li <roman.li@amd.com>
---
 drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index b68fbdd73dca..18c68e531f71 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -55,7 +55,7 @@ struct aux_payload;
 struct set_config_cmd_payload;
 struct dmub_notification;
 
-#define DC_VER "3.2.343"
+#define DC_VER "3.2.344"
 
 /**
  * MAX_SURFACES - representative of the upper bound of surfaces that can be piped to a single CRTC
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH 03/12] drm/amd/display: fix dmub access race condition
  2025-07-30 18:58 ` [PATCH 03/12] drm/amd/display: fix dmub access race condition Roman.Li
@ 2025-08-05 11:51   ` Klara Modin
  2025-08-18 11:31     ` Klara Modin
  0 siblings, 1 reply; 16+ messages in thread
From: Klara Modin @ 2025-08-05 11:51 UTC (permalink / raw)
  To: Roman.Li
  Cc: amd-gfx, Harry Wentland, Leo Li, Aurabindo Pillai, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Nicholas Kazlauskas

Hi,

On 2025-07-30 14:58:54 -0400, Roman.Li@amd.com wrote:
> From: Aurabindo Pillai <aurabindo.pillai@amd.com>
> 
> Accessing DC from amdgpu_dm is usually preceded by acquisition of the
> dc_lock mutex. Most of the DC APIs that DM calls are under a DC lock.
> However, there are a few that are not. Some DC APIs called from interrupt
> context end up sending DMUB commands via a DC API while other threads are
> using DMUB. This was apparent from a race between calls for setting idle
> optimization enable/disable and the DC API to set vmin/vmax.
> 
> Offload the call to dc_stream_adjust_vmin_vmax() to a thread instead
> of calling it directly from the interrupt handler, such that it waits
> for dc_lock.
> 
> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
> Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
> Signed-off-by: Roman Li <roman.li@amd.com>
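
To make the race concrete, one possible interleaving looks like this
(schematic; the idle-optimization path is just one example of a
DMUB-sending DC call made under dc_lock):

   process context (holds dc_lock)      vupdate interrupt
   -------------------------------      --------------------------------
   dc_allow_idle_optimizations()        dm_crtc_high_irq()
     -> sends DMUB command                -> dc_stream_adjust_vmin_vmax()
        ... DMUB mailbox busy ...              -> sends DMUB command
                                                  (concurrent DMUB access)

The patch moves the interrupt side to a queued work item that acquires
dc_lock before calling into DC.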

With this patch I get a "sleeping function called from invalid context"
splat roughly every second. This occurs with at least PREEMPT_LAZY and
PREEMPT_VOLUNTARY.

BUG: sleeping function called from invalid context at include/linux/sched/mm.h:321
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 13841, name: cc1
preempt_count: 10002, expected: 0
CPU: 23 UID: 1000 PID: 13841 Comm: cc1 Kdump: loaded Tainted: G        W           6.16.0-next-20250805 #631 PREEMPTLAZY 
Tainted: [W]=WARN
Hardware name: Micro-Star International Co., Ltd. MS-7C91/MAG B550 TOMAHAWK (MS-7C91), BIOS A.I0 03/20/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x4b/0x70
 __might_resched.cold+0xac/0xb9
 __kmalloc_cache_noprof+0x2d4/0x430
 ? schedule_dc_vmin_vmax+0x49/0x1c0 [amdgpu]
 ? __alloc_frozen_pages_noprof+0x18f/0x360
 ? _raw_spin_unlock_irqrestore+0x12/0x40
 ? schedule_dc_vmin_vmax+0x49/0x1c0 [amdgpu]
 schedule_dc_vmin_vmax+0x49/0x1c0 [amdgpu]
 dm_crtc_high_irq+0x2ab/0x310 [amdgpu]
 amdgpu_dm_irq_handler+0x8d/0x210 [amdgpu]
 amdgpu_irq_dispatch+0x166/0x1a0 [amdgpu]
 amdgpu_ih_process+0x60/0x160 [amdgpu]
 amdgpu_irq_handler+0x23/0x60 [amdgpu]
 __handle_irq_event_percpu+0x4a/0x1a0
 handle_irq_event+0x38/0x90
 handle_edge_irq+0xc4/0x190
 __common_interrupt+0x44/0xe0
 ? srso_return_thunk+0x5/0x5f
 common_interrupt+0x3a/0xa0
 asm_common_interrupt+0x26/0x40
RIP: 0033:0x55d4848fab81
Code: 28 44 8b 5f 70 ff 87 80 00 00 00 49 89 d7 48 89 34 24 45 8d 73 ff 44 89 f3 21 cb 48 8b 4f 58 41 89 dd 49 c1 e5 03 4a 8b 2c 29 <48> 85 ed 0f 84 36 02 00 00 48 83 fd ff 0f 84 bc 00 00 00 44 39 4d
RSP: 002b:00007fff71eeed50 EFLAGS: 00000212
RAX: 000000000000001f RBX: 0000000000002182 RCX: 00007f47b3f00010
RDX: 000000000000001f RSI: 000055d4bfce1e3c RDI: 000055d4bfbb2e80
RBP: 0000000000000000 R08: 0000000000000001 R09: 000000006d35e182
R10: 000055d4bfb94630 R11: 0000000000004000 R12: 000055d4bfbb2e80
R13: 0000000000010c10 R14: 0000000000003fff R15: 000000000000001f
 </TASK>

...

> +static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
> +	struct dc_stream_state *stream,
> +	struct dc_crtc_timing_adjust *adjust)
> +{
> +	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
> +	if (!offload_work) {
> +		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
> +		return;
> +	}
> +
> +	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
> +	if (!adjust_copy) {
> +		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
> +		kfree(offload_work);
> +		return;
> +	}
> +
> +	dc_stream_retain(stream);
> +	memcpy(adjust_copy, adjust, sizeof(*adjust_copy));
> +
> +	INIT_WORK(&offload_work->work, dm_handle_vmin_vmax_update);
> +	offload_work->adev = adev;
> +	offload_work->stream = stream;
> +	offload_work->adjust = adjust_copy;
> +
> +	queue_work(system_wq, &offload_work->work);
> +}
> +
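
For reference, the queued worker dm_handle_vmin_vmax_update() is not quoted
above; judging from the commit message and the fields set here, it plausibly
does something like this (a sketch, not the literal patch body):

static void dm_handle_vmin_vmax_update(struct work_struct *work)
{
        struct vupdate_offload_work *offload_work =
                container_of(work, struct vupdate_offload_work, work);
        struct amdgpu_device *adev = offload_work->adev;

        /* Process context: blocking on dc_lock is allowed here. */
        mutex_lock(&adev->dm.dc_lock);
        dc_stream_adjust_vmin_vmax(adev->dm.dc, offload_work->stream,
                                   offload_work->adjust);
        mutex_unlock(&adev->dm.dc_lock);

        /* Drop the reference taken by schedule_dc_vmin_vmax(). */
        dc_stream_release(offload_work->stream);
        kfree(offload_work->adjust);
        kfree(offload_work);
}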

The allocations in this function seem to be the culprit:
schedule_dc_vmin_vmax() is reached from the vupdate interrupt handler,
and a GFP_KERNEL allocation may sleep, which is not allowed in that
context. If I change them to GFP_ATOMIC the issue disappears:

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 31ea57edeb45..afe0fea13bb0 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -550,13 +550,13 @@ static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
 	struct dc_stream_state *stream,
 	struct dc_crtc_timing_adjust *adjust)
 {
-	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
+	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_ATOMIC);
 	if (!offload_work) {
 		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
 		return;
 	}
 
-	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
+	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_ATOMIC);
 	if (!adjust_copy) {
 		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
 		kfree(offload_work);

...

Regards,
Klara Modin

^ permalink raw reply related	[flat|nested] 16+ messages in thread

* RE: [PATCH 00/12] DC Patches July 30, 2025
  2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
                   ` (11 preceding siblings ...)
  2025-07-30 18:59 ` [PATCH 12/12] drm/amd/display: Promote DC to 3.2.344 Roman.Li
@ 2025-08-05 13:04 ` Wheeler, Daniel
  12 siblings, 0 replies; 16+ messages in thread
From: Wheeler, Daniel @ 2025-08-05 13:04 UTC (permalink / raw)
  To: Li, Roman, amd-gfx@lists.freedesktop.org
  Cc: Wentland, Harry, Li, Sun peng (Leo), Pillai, Aurabindo, Li, Roman,
	Lin, Wayne, Chung, ChiaHsuan (Tom), Zuo, Jerry, Wu, Ray,
	LIPSKI, IVAN, Hung, Alex, Li, Roman

[Public]

Hi all,

This week, this patchset was tested on 4 systems (two dGPU-based and two APU-based) across multiple display and connection types.

APU
        * Single Display eDP -> 1080p 60hz, 1920x1200 165hz
        * Single Display DP (SST DSC) -> 4k144hz, 4k240hz
        * Multi display -> eDP + DP/HDMI/USB-C -> 1080p 60hz eDP + 4k 144hz, 4k 240hz (Includes USB-C to DP/HDMI adapters)
        * Thunderbolt -> LG Ultrafine 5k
        * MST DSC -> Cable Matters 101075 (DP to 3x DP) with 3x 4k60hz displays, HP Hook G2 with 2x 4k60hz displays
        * USB 4 -> HP Hook G4, Lenovo Thunderbolt Dock, both with 2x 4k60hz DP and 1x 4k60hz HDMI displays
        * SST PCON -> Club3D CAC-1085 + 1x 4k 144hz, FRL3, at a max resolution supported by the dongle of 4k 120hz YUV420 12bpc.
        * MST PCON -> 1x 4k 144hz, FRL3, at a max resolution supported by the adapter of 4k 120hz RGB 8bpc.

DGPU
        * Single Display DP (SST DSC) -> 4k144hz, 4k240hz
        * Multiple Display DP -> 4k240hz + 4k144hz
        * MST (Startech MST14DP123DP [DP to 3x DP] and 2x 4k 60hz displays)
        * MST DSC (with Cable Matters 101075 [DP to 3x DP] with 3x 4k60hz displays)

The testing is a mix of automated and manual tests. Manual testing includes (but is not limited to)
        * Changing display configurations and settings
        * Video/Audio playback
        * Benchmark testing
        * Suspend/Resume testing
        * Feature testing (Freesync, HDCP, etc.)

Automated testing includes (but is not limited to)
        * Script testing (scripts to automate some of the manual checks)
        * IGT testing

The testing is mainly tested on the following displays, but occasionally there are tests with other displays
        * Samsung G8 Neo 4k240hz
        * Samsung QN55QN95B 4k 120hz
        * Acer XV322QKKV 4k144hz
        * HP U27 4k Wireless 4k60hz
        * LG 27UD58B 4k60hz
        * LG 32UN650WA 4k60hz
        * LG Ultrafine 5k 5k60hz
        * AU Optronics B140HAN01.1 1080p 60hz eDP
        * AU Optronics B160UAN01.J 1920x1200 165hz eDP

The patchset consists of the amd-staging-drm-next branch (head commit 6795226cf20783be8a3ede94a26c1bc8976ad697, "drm/amdgpu: Add chain runlists support to GC9.4.2") with the new patches added on top of it.

Tested on Ubuntu 24.04.3, on Wayland and X11, using KDE Plasma and Gnome.

Tested-by: Daniel Wheeler <daniel.wheeler@amd.com>



Thank you,

Dan Wheeler
Sr. Technologist | AMD
SW Display
------------------------------------------------------------------------------------------------------------------
1 Commerce Valley Dr E, Thornhill, ON L3T 7X6
amd.com

-----Original Message-----
From: Roman.Li@amd.com <Roman.Li@amd.com>
Sent: Wednesday, July 30, 2025 2:59 PM
To: amd-gfx@lists.freedesktop.org
Cc: Wentland, Harry <Harry.Wentland@amd.com>; Li, Sun peng (Leo) <Sunpeng.Li@amd.com>; Pillai, Aurabindo <Aurabindo.Pillai@amd.com>; Li, Roman <Roman.Li@amd.com>; Lin, Wayne <Wayne.Lin@amd.com>; Chung, ChiaHsuan (Tom) <ChiaHsuan.Chung@amd.com>; Zuo, Jerry <Jerry.Zuo@amd.com>; Wheeler, Daniel <Daniel.Wheeler@amd.com>; Wu, Ray <Ray.Wu@amd.com>; LIPSKI, IVAN <IVAN.LIPSKI@amd.com>; Hung, Alex <Alex.Hung@amd.com>; Li, Roman <Roman.Li@amd.com>
Subject: [PATCH 00/12] DC Patches July 30, 2025

From: Roman Li <Roman.Li@amd.com>

This DC patchset brings improvements in multiple areas. In summary, we highlight:

* Add interface to log hw state when underflow happens
* Fix hubp programming of 3dlut fast load
* Avoid Read Remote DPCD Many Times
* More liberal vmin/vmax update for freesync
* Fix dmub access race condition

Cc: Daniel Wheeler <daniel.wheeler@amd.com>

Aurabindo Pillai (2):
  drm/amd/display: fix dmub access race condition
  drm/amd/display: more liberal vmin/vmax update for freesync

Duncan Ma (1):
  drm/amd/display: Adjust AUX-less ALPM setting

Fangzhi Zuo (1):
  drm/amd/display: Avoid Read Remote DPCD Many Times

Jingwen Zhu (1):
  drm/amd/display: limited pll vco w/a v2

Mario Limonciello (1):
  drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL
    value"

Muhammad Ahmed (1):
  drm/amd/display: Adding interface to log hw state when underflow
    happens

Paul Hsieh (1):
  drm/amd/display: update dpp/disp clock from smu clock table

Reza Amini (1):
  drm/amd/display: Fixing hubp programming of 3dlut fast load

Ryan Seto (1):
  drm/amd/display: Toggle for Disable Force Pstate Allow on Disable

Siyang Liu (1):
  drm/amd/display: fix a Null pointer dereference vulnerability

Taimur Hassan (1):
  drm/amd/display: Promote DC to 3.2.344

 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 81 ++++++++++++++++---
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 14 ++++
 .../display/amdgpu_dm/amdgpu_dm_mst_types.c   |  9 ++-
 .../display/dc/clk_mgr/dcn301/vg_clk_mgr.c    | 16 ++++
 drivers/gpu/drm/amd/display/dc/core/dc.c      | 37 ++++++---
 drivers/gpu/drm/amd/display/dc/dc.h           | 59 +++++++++++++-
 .../amd/display/dc/dml/dcn301/dcn301_fpu.c    | 20 ++++-
 .../display/dc/hubbub/dcn30/dcn30_hubbub.c    | 31 +++++++
 .../display/dc/hubbub/dcn30/dcn30_hubbub.h    |  6 ++
 .../display/dc/hubbub/dcn31/dcn31_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn32/dcn32_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn35/dcn35_hubbub.c    |  2 +
 .../display/dc/hubbub/dcn401/dcn401_hubbub.c  |  2 +
 .../amd/display/dc/hubp/dcn10/dcn10_hubp.h    |  9 ++-
 .../amd/display/dc/hubp/dcn20/dcn20_hubp.h    |  1 +
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.c    | 26 ++++++
 .../amd/display/dc/hubp/dcn30/dcn30_hubp.h    |  8 +-
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.c    | 15 ++++
 .../amd/display/dc/hubp/dcn31/dcn31_hubp.h    |  6 +-
 .../amd/display/dc/hubp/dcn32/dcn32_hubp.c    |  3 +
 .../amd/display/dc/hubp/dcn35/dcn35_hubp.c    |  3 +
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.c  | 41 ++++++++++
 .../amd/display/dc/hubp/dcn401/dcn401_hubp.h  |  8 +-
 .../amd/display/dc/hwss/dcn10/dcn10_hwseq.c   | 63 +++++++++++----
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.c   | 48 +++++++++++
 .../amd/display/dc/hwss/dcn30/dcn30_hwseq.h   |  5 ++
 .../amd/display/dc/hwss/dcn30/dcn30_init.c    |  1 +
 .../amd/display/dc/hwss/dcn31/dcn31_init.c    |  1 +
 .../amd/display/dc/hwss/dcn314/dcn314_init.c  |  1 +
 .../amd/display/dc/hwss/dcn32/dcn32_init.c    |  1 +
 .../amd/display/dc/hwss/dcn35/dcn35_init.c    |  1 +
 .../amd/display/dc/hwss/dcn351/dcn351_init.c  |  1 +
 .../amd/display/dc/hwss/dcn401/dcn401_hwseq.c | 36 +++++----
 .../amd/display/dc/hwss/dcn401/dcn401_init.c  |  1 +
 .../drm/amd/display/dc/hwss/hw_sequencer.h    |  4 +
 .../gpu/drm/amd/display/dc/inc/hw/dchubbub.h  |  2 +
 drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h  | 21 ++++-
 drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h   | 11 +++
 .../link/protocols/link_edp_panel_control.c   |  2 +-
 .../dc/resource/dcn32/dcn32_resource.h        |  3 +-
 .../dc/resource/dcn401/dcn401_resource.c      |  1 +
 .../dc/resource/dcn401/dcn401_resource.h      |  3 +-
 drivers/gpu/drm/amd/display/dmub/dmub_srv.h   |  1 +
 .../gpu/drm/amd/display/dmub/inc/dmub_cmd.h   |  3 +-
 .../gpu/drm/amd/display/dmub/src/dmub_dcn31.c |  1 +
 45 files changed, 545 insertions(+), 67 deletions(-)

--
2.34.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 03/12] drm/amd/display: fix dmub access race condition
  2025-08-05 11:51   ` Klara Modin
@ 2025-08-18 11:31     ` Klara Modin
  0 siblings, 0 replies; 16+ messages in thread
From: Klara Modin @ 2025-08-18 11:31 UTC (permalink / raw)
  To: Roman.Li
  Cc: amd-gfx, Harry Wentland, Leo Li, Aurabindo Pillai, Wayne Lin,
	Tom Chung, Fangzhi Zuo, Daniel Wheeler, Ray Wu, Ivan Lipski,
	Alex Hung, Nicholas Kazlauskas

Hi,

On 2025-08-05 13:51:41 +0200, Klara Modin wrote:
> Hi,
> 
> On 2025-07-30 14:58:54 -0400, Roman.Li@amd.com wrote:
> > From: Aurabindo Pillai <aurabindo.pillai@amd.com>
> > 
> > Accessing DC from amdgpu_dm is usually preceded by acquisition of the
> > dc_lock mutex. Most of the DC APIs that DM calls are under a DC lock.
> > However, there are a few that are not. Some DC APIs called from interrupt
> > context end up sending DMUB commands via a DC API while other threads are
> > using DMUB. This was apparent from a race between calls for setting idle
> > optimization enable/disable and the DC API to set vmin/vmax.
> > 
> > Offload the call to dc_stream_adjust_vmin_vmax() to a thread instead
> > of calling it directly from the interrupt handler, such that it waits
> > for dc_lock.
> > 
> > Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
> > Signed-off-by: Aurabindo Pillai <aurabindo.pillai@amd.com>
> > Signed-off-by: Roman Li <roman.li@amd.com>
> 
> With this patch I get a "sleeping function called from invalid context"
> splat roughly every second. This occurs with at least PREEMPT_LAZY and
> PREEMPT_VOLUNTARY.
> 
...
> ...
> 
> > +static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
> > +	struct dc_stream_state *stream,
> > +	struct dc_crtc_timing_adjust *adjust)
> > +{
> > +	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
> > +	if (!offload_work) {
> > +		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
> > +		return;
> > +	}
> > +
> > +	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
> > +	if (!adjust_copy) {
> > +		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
> > +		kfree(offload_work);
> > +		return;
> > +	}
> > +
> > +	dc_stream_retain(stream);
> > +	memcpy(adjust_copy, adjust, sizeof(*adjust_copy));
> > +
> > +	INIT_WORK(&offload_work->work, dm_handle_vmin_vmax_update);
> > +	offload_work->adev = adev;
> > +	offload_work->stream = stream;
> > +	offload_work->adjust = adjust_copy;
> > +
> > +	queue_work(system_wq, &offload_work->work);
> > +}
> > +
> 
> The allocations in this function seem to be the culprit:
> schedule_dc_vmin_vmax() is reached from the vupdate interrupt handler,
> and a GFP_KERNEL allocation may sleep, which is not allowed in that
> context. If I change them to GFP_ATOMIC the issue disappears:
> 
> diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> index 31ea57edeb45..afe0fea13bb0 100644
> --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
> @@ -550,13 +550,13 @@ static void schedule_dc_vmin_vmax(struct amdgpu_device *adev,
>  	struct dc_stream_state *stream,
>  	struct dc_crtc_timing_adjust *adjust)
>  {
> -	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_KERNEL);
> +	struct vupdate_offload_work *offload_work = kzalloc(sizeof(*offload_work), GFP_ATOMIC);
>  	if (!offload_work) {
>  		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate vupdate_offload_work\n");
>  		return;
>  	}
>  
> -	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_KERNEL);
> +	struct dc_crtc_timing_adjust *adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_ATOMIC);
>  	if (!adjust_copy) {
>  		drm_dbg_driver(adev_to_drm(adev), "Failed to allocate adjust_copy\n");
>  		kfree(offload_work);
> 

Any thoughts? This is still an issue in current linux-next. I have also
tried using GFP_NOWAIT instead and have not seen any obvious issues under
memory pressure.
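
For comparison, GFP_ATOMIC never sleeps but may dip into the emergency
memory reserves, while GFP_NOWAIT also never sleeps and simply fails fast
without touching the reserves, so either should be safe in this interrupt
path. The GFP_NOWAIT variant is the same two call sites:

        offload_work = kzalloc(sizeof(*offload_work), GFP_NOWAIT);
        ...
        adjust_copy = kzalloc(sizeof(*adjust_copy), GFP_NOWAIT);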

Regards,
Klara Modin

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2025-08-18 13:03 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-30 18:58 [PATCH 00/12] DC Patches July 30, 2025 Roman.Li
2025-07-30 18:58 ` [PATCH 01/12] drm/amd/display: fix a Null pointer dereference vulnerability Roman.Li
2025-07-30 18:58 ` [PATCH 02/12] drm/amd/display: Adjust AUX-less ALPM setting Roman.Li
2025-07-30 18:58 ` [PATCH 03/12] drm/amd/display: fix dmub access race condition Roman.Li
2025-08-05 11:51   ` Klara Modin
2025-08-18 11:31     ` Klara Modin
2025-07-30 18:58 ` [PATCH 04/12] drm/amd/display: more liberal vmin/vmax update for freesync Roman.Li
2025-07-30 18:58 ` [PATCH 05/12] drm/amd/display: update dpp/disp clock from smu clock table Roman.Li
2025-07-30 18:58 ` [PATCH 06/12] drm/amd/display: Revert "drm/amd/display: Fix AMDGPU_MAX_BL_LEVEL value" Roman.Li
2025-07-30 18:58 ` [PATCH 07/12] drm/amd/display: Avoid Read Remote DPCD Many Times Roman.Li
2025-07-30 18:58 ` [PATCH 08/12] drm/amd/display: limited pll vco w/a v2 Roman.Li
2025-07-30 18:59 ` [PATCH 09/12] drm/amd/display: Fixing hubp programming of 3dlut fast load Roman.Li
2025-07-30 18:59 ` [PATCH 10/12] drm/amd/display: Toggle for Disable Force Pstate Allow on Disable Roman.Li
2025-07-30 18:59 ` [PATCH 11/12] drm/amd/display: Adding interface to log hw state when underflow happens Roman.Li
2025-07-30 18:59 ` [PATCH 12/12] drm/amd/display: Promote DC to 3.2.344 Roman.Li
2025-08-05 13:04 ` [PATCH 00/12] DC Patches July 30, 2025 Wheeler, Daniel

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).