* [PATCH 01/19] drm/amd/display: Add allow_clock_gating to dcn42 dccg
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 02/19] drm/amd/display: bypass post csc for additional color spaces in dcn42 Chenyu Chen
` (18 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Roman Li, Chenyu Chen
From: Roman Li <Roman.Li@amd.com>
[Why]
The allow_clock_gating function is present in all other DCN versions
and is required to properly migrate DCCG register access from hwseq
to the dccg component, resolving register conflicts.
[How]
Add the missing .allow_clock_gating function pointer to the
dccg42_funcs struct.
Signed-off-by: Roman Li <roman.li@amd.com>
Acked-by: Chenyu Chen <chen-yu.chen@amd.com>
Reviewed-by: Alex Hung <alex.hung@amd.com>
---
drivers/gpu/drm/amd/display/dc/dccg/dcn42/dcn42_dccg.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn42/dcn42_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn42/dcn42_dccg.c
index b813310763e5..9612f4498ef6 100644
--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn42/dcn42_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn42/dcn42_dccg.c
@@ -6,6 +6,7 @@
#include "core_types.h"
#include "dcn35/dcn35_dccg.h"
#include "dcn42_dccg.h"
+#include "dcn20/dcn20_dccg.h"
#define TO_DCN_DCCG(dccg)\
container_of(dccg, struct dcn_dccg, base)
@@ -306,6 +307,7 @@ static const struct dccg_funcs dccg42_funcs = {
.dccg_root_gate_disable_control = dccg35_root_gate_disable_control,
.dccg_read_reg_state = dccg31_read_reg_state,
.dccg_enable_global_fgcg = dccg42_enable_global_fgcg,
+ .allow_clock_gating = dccg2_allow_clock_gating
};
struct dccg *dccg42_create(
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread

* [PATCH 02/19] drm/amd/display: bypass post csc for additional color spaces in dcn42
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
2026-04-15 7:39 ` [PATCH 01/19] drm/amd/display: Add allow_clock_gating to dcn42 dccg Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 03/19] drm/amd/display: Remove unused dml2_project Chenyu Chen
` (17 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Roman Li, Chenyu Chen
From: Roman Li <Roman.Li@amd.com>
[Why]
This aligns dcn42 with:
"drm/amd/display: bypass post csc for additional color spaces in dal"
[How]
Apply the same post csc bypass logic to dcn42 dpp using the
helper function.
Signed-off-by: Roman Li <roman.li@amd.com>
Acked-by: Chenyu Chen <chen-yu.chen@amd.com>
Reviewed-by: Alex Hung <alex.hung@amd.com>
---
drivers/gpu/drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c
index c126fb9d5bfa..b5d7ed5dd511 100644
--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c
+++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c
@@ -269,10 +269,10 @@ static void dpp42_dpp_setup(
tbl_entry.color_space = input_color_space;
- if (color_space >= COLOR_SPACE_YCBCR601)
- select = INPUT_CSC_SELECT_ICSC;
- else
+ if (dpp3_should_bypass_post_csc_for_colorspace(color_space))
select = INPUT_CSC_SELECT_BYPASS;
+ else
+ select = INPUT_CSC_SELECT_ICSC;
dpp3_program_post_csc(dpp_base, color_space, select,
&tbl_entry);
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread

* [PATCH 03/19] drm/amd/display: Remove unused dml2_project
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
2026-04-15 7:39 ` [PATCH 01/19] drm/amd/display: Add allow_clock_gating to dcn42 dccg Chenyu Chen
2026-04-15 7:39 ` [PATCH 02/19] drm/amd/display: bypass post csc for additional color spaces in dcn42 Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 04/19] drm/amd/display: Unset Replay desync error verification by default Chenyu Chen
` (16 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Roman Li, Chenyu Chen
From: Roman Li <Roman.Li@amd.com>
Remove all references to dml2_project_dcn40 from dml2.
The project is not used.
Signed-off-by: Roman Li <roman.li@amd.com>
Acked-by: Chenyu Chen <chen-yu.chen@amd.com>
Reviewed-by: Alex Hung <alex.hung@amd.com>
---
.../gpu/drm/amd/display/dc/dml2_0/dml21/inc/dml_top_types.h | 1 -
.../display/dc/dml2_0/dml21/src/dml2_core/dml2_core_factory.c | 1 -
.../display/dc/dml2_0/dml21/src/dml2_dpmm/dml2_dpmm_factory.c | 1 -
.../display/dc/dml2_0/dml21/src/dml2_mcg/dml2_mcg_factory.c | 1 -
.../display/dc/dml2_0/dml21/src/dml2_pmo/dml2_pmo_factory.c | 3 +--
.../display/dc/dml2_0/dml21/src/dml2_top/dml2_top_interfaces.c | 1 -
6 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/inc/dml_top_types.h b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/inc/dml_top_types.h
index 98b26116cdc1..dff903a103db 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/inc/dml_top_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/inc/dml_top_types.h
@@ -19,7 +19,6 @@ enum dml2_project_id {
dml2_project_dcn4x_stage1,
dml2_project_dcn4x_stage2,
dml2_project_dcn4x_stage2_auto_drr_svp,
- dml2_project_dcn40,
dml2_project_dcn42,
};
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_core/dml2_core_factory.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_core/dml2_core_factory.c
index 6cad99c21139..67e307fa4310 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_core/dml2_core_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_core/dml2_core_factory.c
@@ -21,7 +21,6 @@ bool dml2_core_create(enum dml2_project_id project_id, struct dml2_core_instance
case dml2_project_dcn4x_stage1:
result = false;
break;
- case dml2_project_dcn40:
case dml2_project_dcn4x_stage2:
case dml2_project_dcn4x_stage2_auto_drr_svp:
out->initialize = &core_dcn4_initialize;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_dpmm/dml2_dpmm_factory.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_dpmm/dml2_dpmm_factory.c
index 39965ff2e111..be0517e10104 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_dpmm/dml2_dpmm_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_dpmm/dml2_dpmm_factory.c
@@ -33,7 +33,6 @@ bool dml2_dpmm_create(enum dml2_project_id project_id, struct dml2_dpmm_instance
out->map_watermarks = &dummy_map_watermarks;
result = true;
break;
- case dml2_project_dcn40:
case dml2_project_dcn4x_stage2:
out->map_mode_to_soc_dpm = &dpmm_dcn3_map_mode_to_soc_dpm;
out->map_watermarks = &dummy_map_watermarks;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_mcg/dml2_mcg_factory.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_mcg/dml2_mcg_factory.c
index fb0b0ac547c7..270283332cc1 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_mcg/dml2_mcg_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_mcg/dml2_mcg_factory.c
@@ -27,7 +27,6 @@ bool dml2_mcg_create(enum dml2_project_id project_id, struct dml2_mcg_instance *
out->build_min_clock_table = &dummy_build_min_clock_table;
result = true;
break;
- case dml2_project_dcn40:
case dml2_project_dcn4x_stage2:
case dml2_project_dcn4x_stage2_auto_drr_svp:
out->build_min_clock_table = &mcg_dcn4_build_min_clock_table;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_pmo/dml2_pmo_factory.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_pmo/dml2_pmo_factory.c
index 83802aac11cd..af2ba7d08a61 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_pmo/dml2_pmo_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_pmo/dml2_pmo_factory.c
@@ -3,8 +3,8 @@
// Copyright 2024 Advanced Micro Devices, Inc.
#include "dml2_pmo_factory.h"
-#include "dml2_pmo_dcn3.h"
#include "dml2_pmo_dcn4_fams2.h"
+#include "dml2_pmo_dcn3.h"
#include "dml2_external_lib_deps.h"
static bool dummy_init_for_stutter(struct dml2_pmo_init_for_stutter_in_out *in_out)
@@ -40,7 +40,6 @@ bool dml2_pmo_create(enum dml2_project_id project_id, struct dml2_pmo_instance *
out->optimize_dcc_mcache = pmo_dcn4_fams2_optimize_dcc_mcache;
result = true;
break;
- case dml2_project_dcn40:
case dml2_project_dcn4x_stage2:
out->initialize = pmo_dcn3_initialize;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_top/dml2_top_interfaces.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_top/dml2_top_interfaces.c
index a6c5031f69c1..04860b6790df 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_top/dml2_top_interfaces.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/src/dml2_top/dml2_top_interfaces.c
@@ -17,7 +17,6 @@ bool dml2_initialize_instance(struct dml2_initialize_instance_in_out *in_out)
case dml2_project_dcn4x_stage1:
case dml2_project_dcn4x_stage2:
case dml2_project_dcn4x_stage2_auto_drr_svp:
- case dml2_project_dcn40:
case dml2_project_dcn42:
return dml2_top_soc15_initialize_instance(in_out);
case dml2_project_invalid:
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread

* [PATCH 04/19] drm/amd/display: Unset Replay desync error verification by default
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (2 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 03/19] drm/amd/display: Remove unused dml2_project Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 05/19] drm/amd/display: Align HWSS fast commit path with legacy path Chenyu Chen
` (15 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Allen Li, Robin Chen, Allen Li,
Chenyu Chen
From: Allen Li <Allen.Li@amd.com>
[Why & How]
An unexpected desync error occurs during the PSR -> Replay transition,
so disable Replay desync error detection by default.
Reviewed-by: Robin Chen <robin.chen@amd.com>
Signed-off-by: Allen Li <allen.li@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../drm/amd/display/dc/link/protocols/link_edp_panel_control.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
index 4a2699a374b7..4ae739dd9c7e 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
@@ -1050,8 +1050,7 @@ bool edp_setup_freesync_replay(struct dc_link *link, const struct dc_stream_stat
if (link->replay_settings.replay_feature_enabled) {
replay_config.bits.FREESYNC_PANEL_REPLAY_MODE = 1;
- replay_config.bits.TIMING_DESYNC_ERROR_VERIFICATION =
- link->replay_settings.config.replay_timing_sync_supported;
+ replay_config.bits.TIMING_DESYNC_ERROR_VERIFICATION = 0;
replay_config.bits.STATE_TRANSITION_ERROR_DETECTION = 1;
dm_helpers_dp_write_dpcd(link->ctx, link,
DP_SINK_PR_ENABLE_AND_CONFIGURATION,
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread

* [PATCH 05/19] drm/amd/display: Align HWSS fast commit path with legacy path
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (3 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 04/19] drm/amd/display: Unset Replay desync error verification by default Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 06/19] drm/amd/display: Fix implicit narrowing conversion warnings Chenyu Chen
` (14 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Rafal Ostrowski, Alvin Lee, Chenyu Chen
From: Rafal Ostrowski <rafal.ostrowski@amd.com>
Add missing operations to commit_planes_for_stream_fast and
hwss_build_fast_sequence to match the legacy commit_planes_for_stream
behavior for UPDATE_TYPE_FAST updates.
- Add stream-level fast update flags (cursor_attr, cursor_pos,
periodic_interrupt, info_frame, dmdata, dither) to dc_stream.h
- Add stream-level fields to dc_fast_update struct for fast/full
update classification in populate_fast_updates/fast_updates_exist
- Add HWSS_SETUP_PERIODIC_INTERRUPT block sequence entry, delegating
to dc->hwss.setup_periodic_interrupt instead of calling dcn10
directly
- Add HUBP_ENABLE_3DLUT_FL block for 3DLUT FL with
should_update_pipe_for_stream/plane guards
- Add DPP_SET_CURSOR_MATRIX block with new cursor_csc_change flag
- Widen DPP_PROGRAM_GAMUT_REMAP to also trigger on stream gamut_remap
- Add info frame, dmdata, dither, and cursor blocks to
hwss_build_fast_sequence
- Reclassify cursor_position/cursor_attributes as UPDATE_TYPE_FAST
- Extract dc_dmdata_types.h to resolve circular include between
hw_sequencer.h and dc_stream.h
- Remove dcn10_hwseq.h include from dc_hw_sequencer.c
Reviewed-by: Alvin Lee <alvin.lee2@amd.com>
Signed-off-by: Rafal Ostrowski <rafal.ostrowski@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
drivers/gpu/drm/amd/display/dc/core/dc.c | 117 ++++-
.../drm/amd/display/dc/core/dc_hw_sequencer.c | 475 +++++++++++++++++-
drivers/gpu/drm/amd/display/dc/dc.h | 15 +
drivers/gpu/drm/amd/display/dc/dc_stream.h | 35 +-
drivers/gpu/drm/amd/display/dc/dc_types.h | 30 ++
.../drm/amd/display/dc/hwss/hw_sequencer.h | 138 +++++
6 files changed, 765 insertions(+), 45 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 63f51c69919b..534f770949d5 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -2947,6 +2947,11 @@ static struct surface_update_descriptor det_surface_update(
elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
}
+ if (u->cursor_csc_color_matrix) {
+ update_flags->bits.cursor_csc_color_matrix_change = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
if (u->coeff_reduction_factor) {
update_flags->bits.coeff_reduction_change = 1;
elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
@@ -3074,7 +3079,7 @@ static struct surface_update_descriptor check_update_surfaces_for_stream(
elevate_update_type(&overall_type, UPDATE_TYPE_FULL, LOCK_DESCRIPTOR_GLOBAL | LOCK_DESCRIPTOR_LINK);
}
- if (stream_update->gamut_remap)
+ if (check_config->enable_legacy_fast_update && stream_update->gamut_remap)
su_flags->bits.gamut_remap = 1;
if (stream_update->wb_update)
@@ -3105,6 +3110,29 @@ static struct surface_update_descriptor check_update_surfaces_for_stream(
elevate_update_type(&overall_type, UPDATE_TYPE_FULL, LOCK_DESCRIPTOR_GLOBAL);
// Non-global cases
+
+ if (stream_update->gamut_remap) {
+ su_flags->bits.gamut_remap = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
+ if ((stream_update->hdr_static_metadata && !stream_update->stream->use_dynamic_meta) ||
+ stream_update->vrr_infopacket ||
+ stream_update->vsc_infopacket ||
+ stream_update->vsp_infopacket ||
+ stream_update->hfvsif_infopacket ||
+ stream_update->adaptive_sync_infopacket ||
+ stream_update->vtem_infopacket ||
+ stream_update->avi_infopacket) {
+ su_flags->bits.info_frame = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
+ if (stream_update->hdr_static_metadata && stream_update->stream->use_dynamic_meta) {
+ su_flags->bits.dmdata = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
if (stream_update->output_csc_transform) {
su_flags->bits.out_csc = 1;
elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
@@ -3114,6 +3142,26 @@ static struct surface_update_descriptor check_update_surfaces_for_stream(
su_flags->bits.out_tf = 1;
elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
}
+
+ if (stream_update->periodic_interrupt) {
+ su_flags->bits.periodic_interrupt = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
+ if (stream_update->dither_option) {
+ su_flags->bits.dither = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
+ if (stream_update->cursor_attributes) {
+ su_flags->bits.cursor_attr = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
+
+ if (stream_update->cursor_position) {
+ su_flags->bits.cursor_pos = 1;
+ elevate_update_type(&overall_type, UPDATE_TYPE_FAST, LOCK_DESCRIPTOR_STREAM);
+ }
}
for (int i = 0 ; i < surface_count; i++) {
@@ -5110,9 +5158,35 @@ void populate_fast_updates(struct dc_fast_update *fast_update,
if (stream_update) {
fast_update[0].out_transfer_func = stream_update->out_transfer_func;
fast_update[0].output_csc_transform = stream_update->output_csc_transform;
+ fast_update[0].cursor_attributes = stream_update->cursor_attributes;
+ fast_update[0].cursor_position = stream_update->cursor_position;
+ fast_update[0].periodic_interrupt = stream_update->periodic_interrupt;
+ fast_update[0].dither_option = stream_update->dither_option;
+ fast_update[0].gamut_remap = stream_update->gamut_remap;
+ fast_update[0].vrr_infopacket = stream_update->vrr_infopacket;
+ fast_update[0].vsc_infopacket = stream_update->vsc_infopacket;
+ fast_update[0].vsp_infopacket = stream_update->vsp_infopacket;
+ fast_update[0].hfvsif_infopacket = stream_update->hfvsif_infopacket;
+ fast_update[0].vtem_infopacket = stream_update->vtem_infopacket;
+ fast_update[0].adaptive_sync_infopacket = stream_update->adaptive_sync_infopacket;
+ fast_update[0].avi_infopacket = stream_update->avi_infopacket;
+ fast_update[0].hdr_static_metadata = stream_update->hdr_static_metadata;
} else {
fast_update[0].out_transfer_func = NULL;
fast_update[0].output_csc_transform = NULL;
+ fast_update[0].cursor_attributes = NULL;
+ fast_update[0].cursor_position = NULL;
+ fast_update[0].periodic_interrupt = NULL;
+ fast_update[0].dither_option = NULL;
+ fast_update[0].gamut_remap = NULL;
+ fast_update[0].vrr_infopacket = NULL;
+ fast_update[0].vsc_infopacket = NULL;
+ fast_update[0].vsp_infopacket = NULL;
+ fast_update[0].hfvsif_infopacket = NULL;
+ fast_update[0].vtem_infopacket = NULL;
+ fast_update[0].adaptive_sync_infopacket = NULL;
+ fast_update[0].avi_infopacket = NULL;
+ fast_update[0].hdr_static_metadata = NULL;
}
for (i = 0; i < surface_count; i++) {
@@ -5133,7 +5207,20 @@ static bool fast_updates_exist(const struct dc_fast_update *fast_update, int sur
int i;
if (fast_update[0].out_transfer_func ||
- fast_update[0].output_csc_transform)
+ fast_update[0].output_csc_transform ||
+ fast_update[0].gamut_remap ||
+ fast_update[0].cursor_attributes ||
+ fast_update[0].cursor_position ||
+ fast_update[0].periodic_interrupt ||
+ fast_update[0].dither_option ||
+ fast_update[0].vrr_infopacket ||
+ fast_update[0].vsc_infopacket ||
+ fast_update[0].vsp_infopacket ||
+ fast_update[0].hfvsif_infopacket ||
+ fast_update[0].vtem_infopacket ||
+ fast_update[0].adaptive_sync_infopacket ||
+ fast_update[0].avi_infopacket ||
+ fast_update[0].hdr_static_metadata)
return true;
for (i = 0; i < surface_count; i++) {
@@ -5157,7 +5244,20 @@ bool fast_nonaddr_updates_exist(struct dc_fast_update *fast_update, int surface_
int i;
if (fast_update[0].out_transfer_func ||
- fast_update[0].output_csc_transform)
+ fast_update[0].output_csc_transform ||
+ fast_update[0].cursor_attributes ||
+ fast_update[0].cursor_position ||
+ fast_update[0].periodic_interrupt ||
+ fast_update[0].dither_option ||
+ fast_update[0].gamut_remap ||
+ fast_update[0].vrr_infopacket ||
+ fast_update[0].vsc_infopacket ||
+ fast_update[0].vsp_infopacket ||
+ fast_update[0].hfvsif_infopacket ||
+ fast_update[0].vtem_infopacket ||
+ fast_update[0].adaptive_sync_infopacket ||
+ fast_update[0].avi_infopacket ||
+ fast_update[0].hdr_static_metadata)
return true;
for (i = 0; i < surface_count; i++) {
@@ -5241,23 +5341,12 @@ static bool full_update_required(
(((stream_update->src.height != 0 && stream_update->src.width != 0) ||
(stream_update->dst.height != 0 && stream_update->dst.width != 0) ||
stream_update->integer_scaling_update) ||
- stream_update->hdr_static_metadata ||
stream_update->abm_level ||
- stream_update->periodic_interrupt ||
- stream_update->vrr_infopacket ||
- stream_update->vsc_infopacket ||
- stream_update->vsp_infopacket ||
- stream_update->hfvsif_infopacket ||
- stream_update->vtem_infopacket ||
- stream_update->adaptive_sync_infopacket ||
- stream_update->avi_infopacket ||
stream_update->dpms_off ||
stream_update->allow_freesync ||
stream_update->vrr_active_variable ||
stream_update->vrr_active_fixed ||
- stream_update->gamut_remap ||
stream_update->output_color_space ||
- stream_update->dither_option ||
stream_update->wb_update ||
stream_update->dsc_config ||
stream_update->mst_bw_update ||
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
index 7333f5905330..f8a6916bbd4d 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
@@ -39,6 +39,8 @@
#include "abm.h"
#include "dcn10/dcn10_hubbub.h"
#include "dce/dmub_hw_lock_mgr.h"
+#include "custom_float.h"
+#include "link_service.h"
#define MAX_NUM_MCACHE 8
@@ -789,8 +791,242 @@ void hwss_build_fast_sequence(struct dc *dc,
(*num_steps)++;
}
+ if (dc->hwss.setup_periodic_interrupt && stream->update_flags.bits.periodic_interrupt) {
+ block_sequence[*num_steps].params.setup_periodic_interrupt_params.dc = dc;
+ block_sequence[*num_steps].params.setup_periodic_interrupt_params.pipe_ctx = pipe_ctx;
+ block_sequence[*num_steps].func = HWSS_SETUP_PERIODIC_INTERRUPT;
+ (*num_steps)++;
+ }
+
+ if (stream->update_flags.bits.info_frame) {
+ resource_build_info_frame(pipe_ctx);
+ block_sequence[*num_steps].params.update_info_frame_params.dc = dc;
+ block_sequence[*num_steps].params.update_info_frame_params.pipe_ctx = pipe_ctx;
+ block_sequence[*num_steps].func = HWSS_UPDATE_INFO_FRAME;
+ (*num_steps)++;
+
+ if (dc_is_dp_signal(pipe_ctx->stream->signal)) {
+ block_sequence[*num_steps].params.dp_trace_source_sequence_params.dc = dc;
+ block_sequence[*num_steps].params.dp_trace_source_sequence_params.link = pipe_ctx->stream->link;
+ block_sequence[*num_steps].params.dp_trace_source_sequence_params.dp_test_mode = DPCD_SOURCE_SEQ_AFTER_UPDATE_INFO_FRAME;
+ block_sequence[*num_steps].func = DP_TRACE_SOURCE_SEQUENCE;
+ (*num_steps)++;
+ }
+ }
+
+ if (dc->hwss.set_dmdata_attributes && stream->update_flags.bits.dmdata &&
+ stream->use_dynamic_meta && pipe_ctx->stream->dmdata_address.quad_part != 0) {
+ struct dc_dmdata_attributes attr = { 0 };
+
+ attr.dmdata_mode = DMDATA_HW_MODE;
+ attr.dmdata_size = dc_is_hdmi_signal(pipe_ctx->stream->signal) ? 32 : 36;
+ attr.address.quad_part = pipe_ctx->stream->dmdata_address.quad_part;
+ attr.dmdata_dl_delta = 0;
+ attr.dmdata_qos_mode = 0;
+ attr.dmdata_qos_level = 0;
+ attr.dmdata_repeat = 1; /* always repeat */
+ attr.dmdata_updated = 1;
+ attr.dmdata_sw_data = NULL;
+
+ block_sequence[*num_steps].params.set_dmdata_attributes_params.hubp = pipe_ctx->plane_res.hubp;
+ block_sequence[*num_steps].params.set_dmdata_attributes_params.attr = attr;
+ block_sequence[*num_steps].func = HUBP_SET_DMDATA_ATTRIBUTES;
+ (*num_steps)++;
+ }
+
+ /* Track cursor lock state - separate locks for attribute and position updates */
+ bool enable_cursor_offload = false;
+
+ if ((dc->hwss.set_cursor_attribute && stream->update_flags.bits.cursor_attr) ||
+ (dc->hwss.set_cursor_position && stream->update_flags.bits.cursor_pos))
+ enable_cursor_offload = dc_dmub_srv_is_cursor_offload_enabled(dc);
+
+ /* Cursor attribute updates - separate lock/iterate/unlock */
+ if (dc->hwss.set_cursor_attribute && stream->update_flags.bits.cursor_attr) {
+ struct pipe_ctx *cursor_pipe_to_program = NULL;
+
+ for (i = 0; i < MAX_PIPES; i++) {
+ current_pipe = &context->res_ctx.pipe_ctx[i];
+
+ if (current_pipe->stream != stream)
+ continue;
+
+ if (!cursor_pipe_to_program) {
+ cursor_pipe_to_program = current_pipe;
+
+ if (enable_cursor_offload && dc->hwss.begin_cursor_offload_update) {
+ block_sequence[*num_steps].params.begin_cursor_offload_update_params.dc = dc;
+ block_sequence[*num_steps].params.begin_cursor_offload_update_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].func = HWSS_BEGIN_CURSOR_OFFLOAD_UPDATE;
+ (*num_steps)++;
+ } else {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx = current_pipe;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = true;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+
+ if (current_pipe->next_odm_pipe) {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx =
+ current_pipe->next_odm_pipe;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = true;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+ }
+ }
+ }
+
+ block_sequence[*num_steps].params.hubp_set_cursor_attributes_params.hubp =
+ current_pipe->plane_res.hubp;
+ block_sequence[*num_steps].params.hubp_set_cursor_attributes_params.attributes =
+ &current_pipe->stream->cursor_attributes;
+ block_sequence[*num_steps].func = HUBP_SET_CURSOR_ATTRIBUTES;
+ (*num_steps)++;
+
+ block_sequence[*num_steps].params.dpp_set_cursor_attributes_params.dpp =
+ current_pipe->plane_res.dpp;
+ block_sequence[*num_steps].params.dpp_set_cursor_attributes_params.attributes =
+ &current_pipe->stream->cursor_attributes;
+ block_sequence[*num_steps].func = DPP_SET_CURSOR_ATTRIBUTES;
+ (*num_steps)++;
+
+ if (dc->ctx->dmub_srv) {
+ block_sequence[*num_steps].params.send_cursor_info_to_dmu_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].params.send_cursor_info_to_dmu_params.pipe_idx =
+ current_pipe->pipe_idx;
+ block_sequence[*num_steps].func = DC_SEND_CURSOR_INFO_TO_DMU;
+ (*num_steps)++;
+ }
+
+ block_sequence[*num_steps].params.set_cursor_sdr_white_level_params.dc = dc;
+ block_sequence[*num_steps].params.set_cursor_sdr_white_level_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].func = SET_CURSOR_SDR_WHITE_LEVEL;
+ (*num_steps)++;
+
+ if (enable_cursor_offload && dc->hwss.update_cursor_offload_pipe) {
+ block_sequence[*num_steps].params.update_cursor_offload_pipe_params.dc = dc;
+ block_sequence[*num_steps].params.update_cursor_offload_pipe_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].func = HWSS_UPDATE_CURSOR_OFFLOAD_PIPE;
+ (*num_steps)++;
+ }
+ }
+
+ /* Unlock cursor attributes after all pipes have been programmed */
+ if (cursor_pipe_to_program) {
+ if (enable_cursor_offload && dc->hwss.commit_cursor_offload_update) {
+ block_sequence[*num_steps].params.commit_cursor_offload_update_params.dc = dc;
+ block_sequence[*num_steps].params.commit_cursor_offload_update_params.pipe_ctx =
+ cursor_pipe_to_program;
+ block_sequence[*num_steps].func = HWSS_COMMIT_CURSOR_OFFLOAD_UPDATE;
+ (*num_steps)++;
+ } else {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx = cursor_pipe_to_program;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = false;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+
+ if (cursor_pipe_to_program->next_odm_pipe) {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx =
+ cursor_pipe_to_program->next_odm_pipe;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = false;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+ }
+ }
+ }
+ }
+
+ /* Cursor position updates */
+ if (dc->hwss.set_cursor_position && stream->update_flags.bits.cursor_pos) {
+ struct pipe_ctx *cursor_pipe_to_program = NULL;
+
+ for (i = 0; i < MAX_PIPES; i++) {
+ current_pipe = &context->res_ctx.pipe_ctx[i];
+
+ if (current_pipe->stream != stream ||
+ (!current_pipe->plane_res.mi && !current_pipe->plane_res.hubp) ||
+ !current_pipe->plane_state ||
+ (!current_pipe->plane_res.xfm && !current_pipe->plane_res.dpp) ||
+ (!current_pipe->plane_res.ipp && !current_pipe->plane_res.dpp))
+ continue;
+
+ if (!cursor_pipe_to_program) {
+ cursor_pipe_to_program = current_pipe;
+
+ if (enable_cursor_offload && dc->hwss.begin_cursor_offload_update) {
+ block_sequence[*num_steps].params.begin_cursor_offload_update_params.dc = dc;
+ block_sequence[*num_steps].params.begin_cursor_offload_update_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].func = HWSS_BEGIN_CURSOR_OFFLOAD_UPDATE;
+ (*num_steps)++;
+ } else {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx = current_pipe;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = true;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+ }
+ }
+
+ block_sequence[*num_steps].params.set_cursor_position_params.dc = dc;
+ block_sequence[*num_steps].params.set_cursor_position_params.pipe_ctx = current_pipe;
+ block_sequence[*num_steps].func = SET_CURSOR_POSITION;
+ (*num_steps)++;
+
+ if (enable_cursor_offload && dc->hwss.update_cursor_offload_pipe) {
+ block_sequence[*num_steps].params.update_cursor_offload_pipe_params.dc = dc;
+ block_sequence[*num_steps].params.update_cursor_offload_pipe_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].func = HWSS_UPDATE_CURSOR_OFFLOAD_PIPE;
+ (*num_steps)++;
+ }
+
+ if (dc->ctx->dmub_srv) {
+ block_sequence[*num_steps].params.send_cursor_info_to_dmu_params.pipe_ctx =
+ current_pipe;
+ block_sequence[*num_steps].params.send_cursor_info_to_dmu_params.pipe_idx =
+ current_pipe->pipe_idx;
+ block_sequence[*num_steps].func = DC_SEND_CURSOR_INFO_TO_DMU;
+ (*num_steps)++;
+ }
+ }
+
+ /* Unlock cursor position after all pipes have been programmed */
+ if (cursor_pipe_to_program) {
+ if (enable_cursor_offload && dc->hwss.commit_cursor_offload_update) {
+ block_sequence[*num_steps].params.commit_cursor_offload_update_params.dc = dc;
+ block_sequence[*num_steps].params.commit_cursor_offload_update_params.pipe_ctx =
+ cursor_pipe_to_program;
+ block_sequence[*num_steps].func = HWSS_COMMIT_CURSOR_OFFLOAD_UPDATE;
+ (*num_steps)++;
+ } else {
+ block_sequence[*num_steps].params.cursor_lock_params.dc = dc;
+ block_sequence[*num_steps].params.cursor_lock_params.pipe_ctx = cursor_pipe_to_program;
+ block_sequence[*num_steps].params.cursor_lock_params.lock = false;
+ block_sequence[*num_steps].func = HWSS_CURSOR_LOCK;
+ (*num_steps)++;
+ }
+ }
+ }
+
current_pipe = pipe_ctx;
while (current_pipe) {
+ if (current_pipe->stream->update_flags.bits.dither) {
+ resource_build_bit_depth_reduction_params(current_pipe->stream, &current_pipe->stream->bit_depth_params);
+ block_sequence[*num_steps].params.opp_program_fmt_params.opp = current_pipe->stream_res.opp;
+ block_sequence[*num_steps].params.opp_program_fmt_params.fmt_bit_depth = &current_pipe->stream->bit_depth_params;
+ block_sequence[*num_steps].params.opp_program_fmt_params.clamping = &current_pipe->stream->clamping;
+ block_sequence[*num_steps].func = OPP_PROGRAM_FMT;
+ (*num_steps)++;
+ }
+
current_mpc_pipe = current_pipe;
while (current_mpc_pipe) {
if (current_mpc_pipe->plane_state) {
@@ -831,7 +1067,9 @@ void hwss_build_fast_sequence(struct dc *dc,
(*num_steps)++;
}
- if (dc->hwss.program_gamut_remap && current_mpc_pipe->plane_state->update_flags.bits.gamut_remap_change) {
+ if (dc->hwss.program_gamut_remap &&
+ (current_mpc_pipe->plane_state->update_flags.bits.gamut_remap_change ||
+ current_mpc_pipe->stream->update_flags.bits.gamut_remap)) {
block_sequence[*num_steps].params.program_gamut_remap_params.pipe_ctx = current_mpc_pipe;
block_sequence[*num_steps].func = DPP_PROGRAM_GAMUT_REMAP;
(*num_steps)++;
@@ -856,6 +1094,16 @@ void hwss_build_fast_sequence(struct dc *dc,
block_sequence[*num_steps].func = DPP_PROGRAM_CM_HIST;
(*num_steps)++;
}
+
+ if (current_mpc_pipe->plane_res.dpp &&
+ current_mpc_pipe->plane_res.dpp->funcs->set_cursor_matrix &&
+ current_mpc_pipe->plane_state->update_flags.bits.cursor_csc_color_matrix_change) {
+ block_sequence[*num_steps].params.dpp_set_cursor_matrix_params.dpp = current_mpc_pipe->plane_res.dpp;
+ block_sequence[*num_steps].params.dpp_set_cursor_matrix_params.color_space = current_mpc_pipe->plane_state->color_space;
+ block_sequence[*num_steps].params.dpp_set_cursor_matrix_params.cursor_csc_color_matrix = &current_mpc_pipe->plane_state->cursor_csc_color_matrix;
+ block_sequence[*num_steps].func = DPP_SET_CURSOR_MATRIX;
+ (*num_steps)++;
+ }
}
if (hws->funcs.set_output_transfer_func && current_mpc_pipe->stream->update_flags.bits.out_tf) {
block_sequence[*num_steps].params.set_output_transfer_func_params.dc = dc;
@@ -989,7 +1237,25 @@ void hwss_execute_sequence(struct dc *dc,
params->set_input_transfer_func_params.plane_state);
break;
case DPP_PROGRAM_GAMUT_REMAP:
- dc->hwss.program_gamut_remap(params->program_gamut_remap_params.pipe_ctx);
+ hwss_program_gamut_remap(params);
+ break;
+ case HUBP_ENABLE_3DLUT_FL:
+ hwss_hubp_enable_3dlut_fl(params);
+ break;
+ case OTG_SETUP_VERTICAL_INTERRUPT:
+ hwss_tg_setup_vertical_interrupt0(params);
+ break;
+ case HWSS_SETUP_PERIODIC_INTERRUPT:
+ hwss_setup_periodic_interrupt(dc, params);
+ break;
+ case HWSS_UPDATE_INFO_FRAME:
+ hwss_update_info_frame(dc, params);
+ break;
+ case DP_TRACE_SOURCE_SEQUENCE:
+ hwss_dp_trace_source_sequence(params);
+ break;
+ case HUBP_SET_DMDATA_ATTRIBUTES:
+ hwss_set_dmdata_attributes(params);
break;
case DPP_SETUP_DPP:
hwss_setup_dpp(params);
@@ -1311,9 +1577,30 @@ void hwss_execute_sequence(struct dc *dc,
case ABORT_CURSOR_OFFLOAD_UPDATE:
hwss_abort_cursor_offload_update(params);
break;
+ case HWSS_CURSOR_LOCK:
+ hwss_cursor_lock(params);
+ break;
+ case HWSS_BEGIN_CURSOR_OFFLOAD_UPDATE:
+ hwss_begin_cursor_offload_update(params);
+ break;
+ case HWSS_COMMIT_CURSOR_OFFLOAD_UPDATE:
+ hwss_commit_cursor_offload_update(params);
+ break;
+ case HWSS_UPDATE_CURSOR_OFFLOAD_PIPE:
+ hwss_update_cursor_offload_pipe(params);
+ break;
+ case DC_SEND_CURSOR_INFO_TO_DMU:
+ hwss_send_cursor_info_to_dmu(params);
+ break;
case SET_CURSOR_ATTRIBUTE:
hwss_set_cursor_attribute(params);
break;
+ case HUBP_SET_CURSOR_ATTRIBUTES:
+ hwss_hubp_set_cursor_attributes(params);
+ break;
+ case DPP_SET_CURSOR_ATTRIBUTES:
+ hwss_dpp_set_cursor_attributes(params);
+ break;
case SET_CURSOR_POSITION:
hwss_set_cursor_position(params);
break;
@@ -1732,6 +2019,21 @@ void hwss_add_tg_set_vtg_params(struct block_sequence_state *seq_state,
}
}
+/*
+ * Helper function to add OTG setup vertical interrupt0 to block sequence
+ */
+void hwss_add_vertical_interrupt_setup(struct block_sequence_state *seq_state,
+ struct timing_generator *tg, uint32_t start_line, uint32_t end_line)
+{
+ if (*seq_state->num_steps < MAX_HWSS_BLOCK_SEQUENCE_SIZE) {
+ seq_state->steps[*seq_state->num_steps].params.tg_setup_vertical_interrupt0_params.tg = tg;
+ seq_state->steps[*seq_state->num_steps].params.tg_setup_vertical_interrupt0_params.start_line = start_line;
+ seq_state->steps[*seq_state->num_steps].params.tg_setup_vertical_interrupt0_params.end_line = end_line;
+ seq_state->steps[*seq_state->num_steps].func = OTG_SETUP_VERTICAL_INTERRUPT;
+ (*seq_state->num_steps)++;
+ }
+}
+
/*
* Helper function to add TG setup vertical interrupt2 to block sequence
*/
@@ -1947,6 +2249,27 @@ void hwss_add_tg_wait_double_buffer_pending(struct block_sequence_state *seq_sta
}
}
+void hwss_add_hubp_enable_3dlut_fl(struct block_sequence_state *seq_state,
+ struct hubp *hubp)
+{
+ if (*seq_state->num_steps < MAX_HWSS_BLOCK_SEQUENCE_SIZE) {
+ seq_state->steps[*seq_state->num_steps].params.hubp_enable_3dlut_fl_params.hubp = hubp;
+ seq_state->steps[*seq_state->num_steps].func = HUBP_ENABLE_3DLUT_FL;
+ (*seq_state->num_steps)++;
+ }
+}
+
+void hwss_add_set_dmdata_attributes(struct block_sequence_state *seq_state,
+ struct hubp *hubp, struct dc_dmdata_attributes *attr)
+{
+ if (*seq_state->num_steps < MAX_HWSS_BLOCK_SEQUENCE_SIZE) {
+ seq_state->steps[*seq_state->num_steps].params.set_dmdata_attributes_params.hubp = hubp;
+ seq_state->steps[*seq_state->num_steps].params.set_dmdata_attributes_params.attr = *attr;
+ seq_state->steps[*seq_state->num_steps].func = HUBP_SET_DMDATA_ATTRIBUTES;
+ (*seq_state->num_steps)++;
+ }
+}
+
void hwss_program_manual_trigger(union block_sequence_params *params)
{
struct pipe_ctx *pipe_ctx = params->program_manual_trigger_params.pipe_ctx;
@@ -2416,6 +2739,59 @@ void hwss_tg_set_vtg_params(union block_sequence_params *params)
tg->funcs->set_vtg_params(tg, timing, program_fp2);
}
+void hwss_hubp_enable_3dlut_fl(union block_sequence_params *params)
+{
+ struct hubp *hubp = params->hubp_enable_3dlut_fl_params.hubp;
+
+ if (hubp->funcs->hubp_enable_3dlut_fl)
+ hubp->funcs->hubp_enable_3dlut_fl(hubp, true);
+}
+
+void hwss_update_info_frame(struct dc *dc, union block_sequence_params *params)
+{
+ struct pipe_ctx *pipe_ctx = params->update_info_frame_params.pipe_ctx;
+
+ if (dc->hwss.update_info_frame)
+ dc->hwss.update_info_frame(pipe_ctx);
+}
+
+void hwss_setup_periodic_interrupt(struct dc *dc, union block_sequence_params *params)
+{
+ struct pipe_ctx *pipe_ctx = params->setup_periodic_interrupt_params.pipe_ctx;
+
+ if (dc->hwss.setup_periodic_interrupt)
+ dc->hwss.setup_periodic_interrupt(dc, pipe_ctx);
+}
+
+void hwss_dp_trace_source_sequence(union block_sequence_params *params)
+{
+ struct dc *dc = params->dp_trace_source_sequence_params.dc;
+ struct dc_link *link = params->dp_trace_source_sequence_params.link;
+ uint8_t dp_test_mode = params->dp_trace_source_sequence_params.dp_test_mode;
+
+ if (dc->link_srv->dp_trace_source_sequence)
+ dc->link_srv->dp_trace_source_sequence(link, dp_test_mode);
+}
+
+void hwss_set_dmdata_attributes(union block_sequence_params *params)
+{
+ struct hubp *hubp = params->set_dmdata_attributes_params.hubp;
+ struct dc_dmdata_attributes *attr = &params->set_dmdata_attributes_params.attr;
+
+ if (hubp->funcs->dmdata_set_attributes)
+ hubp->funcs->dmdata_set_attributes(hubp, attr);
+}
+
+void hwss_tg_setup_vertical_interrupt0(union block_sequence_params *params)
+{
+ struct timing_generator *tg = params->tg_setup_vertical_interrupt0_params.tg;
+ uint32_t start_line = params->tg_setup_vertical_interrupt0_params.start_line;
+ uint32_t end_line = params->tg_setup_vertical_interrupt0_params.end_line;
+
+ if (tg->funcs->setup_vertical_interrupt0)
+ tg->funcs->setup_vertical_interrupt0(tg, start_line, end_line);
+}
+
void hwss_tg_setup_vertical_interrupt2(union block_sequence_params *params)
{
struct timing_generator *tg = params->tg_setup_vertical_interrupt2_params.tg;
@@ -3113,6 +3489,51 @@ void hwss_abort_cursor_offload_update(union block_sequence_params *params)
dc->hwss.abort_cursor_offload_update(dc, pipe_ctx);
}
+void hwss_cursor_lock(union block_sequence_params *params)
+{
+ struct dc *dc = params->cursor_lock_params.dc;
+ struct pipe_ctx *pipe_ctx = params->cursor_lock_params.pipe_ctx;
+ bool lock = params->cursor_lock_params.lock;
+
+ if (dc && dc->hwss.cursor_lock)
+ dc->hwss.cursor_lock(dc, pipe_ctx, lock);
+}
+
+void hwss_begin_cursor_offload_update(union block_sequence_params *params)
+{
+ struct dc *dc = params->begin_cursor_offload_update_params.dc;
+ struct pipe_ctx *pipe_ctx = params->begin_cursor_offload_update_params.pipe_ctx;
+
+ if (dc && dc->hwss.begin_cursor_offload_update)
+ dc->hwss.begin_cursor_offload_update(dc, pipe_ctx);
+}
+
+void hwss_commit_cursor_offload_update(union block_sequence_params *params)
+{
+ struct dc *dc = params->commit_cursor_offload_update_params.dc;
+ struct pipe_ctx *pipe_ctx = params->commit_cursor_offload_update_params.pipe_ctx;
+
+ if (dc && dc->hwss.commit_cursor_offload_update)
+ dc->hwss.commit_cursor_offload_update(dc, pipe_ctx);
+}
+
+void hwss_update_cursor_offload_pipe(union block_sequence_params *params)
+{
+ struct dc *dc = params->update_cursor_offload_pipe_params.dc;
+ struct pipe_ctx *pipe_ctx = params->update_cursor_offload_pipe_params.pipe_ctx;
+
+ if (dc && dc->hwss.update_cursor_offload_pipe)
+ dc->hwss.update_cursor_offload_pipe(dc, pipe_ctx);
+}
+
+void hwss_send_cursor_info_to_dmu(union block_sequence_params *params)
+{
+ struct pipe_ctx *pipe_ctx = params->send_cursor_info_to_dmu_params.pipe_ctx;
+ int pipe_idx = params->send_cursor_info_to_dmu_params.pipe_idx;
+
+ dc_send_update_cursor_info_to_dmu(pipe_ctx, pipe_idx);
+}
+
void hwss_set_cursor_attribute(union block_sequence_params *params)
{
struct dc *dc = params->set_cursor_attribute_params.dc;
@@ -3122,6 +3543,24 @@ void hwss_set_cursor_attribute(union block_sequence_params *params)
dc->hwss.set_cursor_attribute(pipe_ctx);
}
+void hwss_hubp_set_cursor_attributes(union block_sequence_params *params)
+{
+ struct hubp *hubp = params->hubp_set_cursor_attributes_params.hubp;
+ const struct dc_cursor_attributes *attributes = params->hubp_set_cursor_attributes_params.attributes;
+
+ if (hubp && hubp->funcs->set_cursor_attributes)
+ hubp->funcs->set_cursor_attributes(hubp, attributes);
+}
+
+void hwss_dpp_set_cursor_attributes(union block_sequence_params *params)
+{
+ struct dpp *dpp = params->dpp_set_cursor_attributes_params.dpp;
+ struct dc_cursor_attributes *attributes = params->dpp_set_cursor_attributes_params.attributes;
+
+ if (dpp && dpp->funcs->set_cursor_attributes)
+ dpp->funcs->set_cursor_attributes(dpp, attributes);
+}
+
void hwss_set_cursor_position(union block_sequence_params *params)
{
struct dc *dc = params->set_cursor_position_params.dc;
@@ -3140,6 +3579,14 @@ void hwss_set_cursor_sdr_white_level(union block_sequence_params *params)
dc->hwss.set_cursor_sdr_white_level(pipe_ctx);
}
+void hwss_program_gamut_remap(union block_sequence_params *params)
+{
+ struct dc *dc = params->program_gamut_remap_params.pipe_ctx->stream->ctx->dc;
+
+ if (dc && dc->hwss.program_gamut_remap)
+ dc->hwss.program_gamut_remap(params->program_gamut_remap_params.pipe_ctx);
+}
+
void hwss_program_output_csc(union block_sequence_params *params)
{
struct dc *dc = params->program_output_csc_params.dc;
@@ -4009,6 +4456,30 @@ void hwss_add_set_cursor_attribute(struct block_sequence_state *seq_state,
}
}
+void hwss_add_hubp_set_cursor_attributes(struct block_sequence_state *seq_state,
+ struct hubp *hubp,
+ const struct dc_cursor_attributes *attributes)
+{
+ if (*seq_state->num_steps < MAX_HWSS_BLOCK_SEQUENCE_SIZE) {
+ seq_state->steps[*seq_state->num_steps].func = HUBP_SET_CURSOR_ATTRIBUTES;
+ seq_state->steps[*seq_state->num_steps].params.hubp_set_cursor_attributes_params.hubp = hubp;
+ seq_state->steps[*seq_state->num_steps].params.hubp_set_cursor_attributes_params.attributes = attributes;
+ (*seq_state->num_steps)++;
+ }
+}
+
+void hwss_add_dpp_set_cursor_attributes(struct block_sequence_state *seq_state,
+ struct dpp *dpp,
+ struct dc_cursor_attributes *attributes)
+{
+ if (*seq_state->num_steps < MAX_HWSS_BLOCK_SEQUENCE_SIZE) {
+ seq_state->steps[*seq_state->num_steps].func = DPP_SET_CURSOR_ATTRIBUTES;
+ seq_state->steps[*seq_state->num_steps].params.dpp_set_cursor_attributes_params.dpp = dpp;
+ seq_state->steps[*seq_state->num_steps].params.dpp_set_cursor_attributes_params.attributes = attributes;
+ (*seq_state->num_steps)++;
+ }
+}
+
void hwss_add_set_cursor_position(struct block_sequence_state *seq_state,
struct dc *dc,
struct pipe_ctx *pipe_ctx)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 7f55ba09b191..1b10b9770982 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -1485,6 +1485,7 @@ union surface_update_flags {
uint32_t pixel_format_change:1;
uint32_t plane_size_change:1;
uint32_t gamut_remap_change:1;
+ uint32_t cursor_csc_color_matrix_change:1;
/* Full updates */
uint32_t new_plane:1;
@@ -1894,6 +1895,20 @@ struct dc_fast_update {
#if defined(CONFIG_DRM_AMD_DC_DCN4_2)
struct cm_hist_control *cm_hist_control;
#endif
+ /* stream-level fast updates */
+ const struct colorspace_transform *gamut_remap;
+ const struct dc_cursor_attributes *cursor_attributes;
+ const struct dc_cursor_position *cursor_position;
+ const struct periodic_interrupt_config *periodic_interrupt;
+ const enum dc_dither_option *dither_option;
+ struct dc_info_packet *vrr_infopacket;
+ struct dc_info_packet *vsc_infopacket;
+ struct dc_info_packet *vsp_infopacket;
+ struct dc_info_packet *hfvsif_infopacket;
+ struct dc_info_packet *vtem_infopacket;
+ struct dc_info_packet *adaptive_sync_infopacket;
+ struct dc_info_packet *avi_infopacket;
+ struct dc_info_packet *hdr_static_metadata;
};
struct dc_surface_update {
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index 7c38fa6f8cb1..88f70a9b64b1 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -64,35 +64,6 @@ struct dc_stream_status {
bool fpo_in_use;
};
-enum hubp_dmdata_mode {
- DMDATA_SW_MODE,
- DMDATA_HW_MODE
-};
-
-struct dc_dmdata_attributes {
- /* Specifies whether dynamic meta data will be updated by software
- * or has to be fetched by hardware (DMA mode)
- */
- enum hubp_dmdata_mode dmdata_mode;
- /* Specifies if current dynamic meta data is to be used only for the current frame */
- bool dmdata_repeat;
- /* Specifies the size of Dynamic Metadata surface in byte. Size of 0 means no Dynamic metadata is fetched */
- uint32_t dmdata_size;
- /* Specifies if a new dynamic meta data should be fetched for an upcoming frame */
- bool dmdata_updated;
- /* If hardware mode is used, the base address where DMDATA surface is located */
- PHYSICAL_ADDRESS_LOC address;
- /* Specifies whether QOS level will be provided by TTU or it will come from DMDATA_QOS_LEVEL */
- bool dmdata_qos_mode;
- /* If qos_mode = 1, this is the QOS value to be used: */
- uint32_t dmdata_qos_level;
- /* Specifies the value in unit of REFCLK cycles to be added to the
- * current time to produce the Amortized deadline for Dynamic Metadata chunk request
- */
- uint32_t dmdata_dl_delta;
- /* An unbounded array of uint32s, represents software dmdata to be loaded */
- uint32_t *dmdata_sw_data;
-};
struct dc_writeback_info {
bool wb_enabled;
@@ -146,6 +117,12 @@ union stream_update_flags {
uint32_t fams_changed : 1;
uint32_t scaler_sharpener : 1;
uint32_t sharpening_required : 1;
+ uint32_t cursor_attr : 1;
+ uint32_t cursor_pos : 1;
+ uint32_t periodic_interrupt : 1;
+ uint32_t info_frame : 1;
+ uint32_t dmdata : 1;
+ uint32_t dither : 1;
} bits;
uint32_t raw;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
index c08d5c005df6..476db257d4ee 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
@@ -790,6 +790,36 @@ struct dc_clock_config {
uint32_t current_clock_khz;/*current clock in use*/
};
+enum hubp_dmdata_mode {
+ DMDATA_SW_MODE,
+ DMDATA_HW_MODE
+};
+
+struct dc_dmdata_attributes {
+ /* Specifies whether dynamic meta data will be updated by software
+ * or has to be fetched by hardware (DMA mode)
+ */
+ enum hubp_dmdata_mode dmdata_mode;
+ /* Specifies if current dynamic meta data is to be used only for the current frame */
+ bool dmdata_repeat;
+ /* Specifies the size of Dynamic Metadata surface in byte. Size of 0 means no Dynamic metadata is fetched */
+ uint32_t dmdata_size;
+ /* Specifies if a new dynamic meta data should be fetched for an upcoming frame */
+ bool dmdata_updated;
+ /* If hardware mode is used, the base address where DMDATA surface is located */
+ PHYSICAL_ADDRESS_LOC address;
+ /* Specifies whether QOS level will be provided by TTU or it will come from DMDATA_QOS_LEVEL */
+ bool dmdata_qos_mode;
+ /* If qos_mode = 1, this is the QOS value to be used: */
+ uint32_t dmdata_qos_level;
+ /* Specifies the value in unit of REFCLK cycles to be added to the
+ * current time to produce the Amortized deadline for Dynamic Metadata chunk request
+ */
+ uint32_t dmdata_dl_delta;
+ /* An unbounded array of uint32s, represents software dmdata to be loaded */
+ uint32_t *dmdata_sw_data;
+};
+
struct hw_asic_id {
uint32_t chip_id;
uint32_t chip_family;
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
index d1dba7ffcd9b..1cb2be00bf72 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
+++ b/drivers/gpu/drm/amd/display/dc/hwss/hw_sequencer.h
@@ -91,6 +91,37 @@ struct program_gamut_remap_params {
struct pipe_ctx *pipe_ctx;
};
+struct hubp_enable_3dlut_fl_params {
+ struct hubp *hubp;
+};
+
+struct tg_setup_vertical_interrupt0_params {
+ struct timing_generator *tg;
+ uint32_t start_line;
+ uint32_t end_line;
+};
+
+struct update_info_frame_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+};
+
+struct setup_periodic_interrupt_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+};
+
+struct dp_trace_source_sequence_params {
+ struct dc *dc;
+ struct dc_link *link;
+ uint8_t dp_test_mode;
+};
+
+struct set_dmdata_attributes_params {
+ struct hubp *hubp;
+ struct dc_dmdata_attributes attr;
+};
+
struct program_manual_trigger_params {
struct pipe_ctx *pipe_ctx;
};
@@ -707,11 +738,47 @@ struct abort_cursor_offload_update_params {
struct pipe_ctx *pipe_ctx;
};
+struct cursor_lock_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+ bool lock;
+};
+
+struct begin_cursor_offload_update_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+};
+
+struct commit_cursor_offload_update_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+};
+
+struct update_cursor_offload_pipe_params {
+ struct dc *dc;
+ struct pipe_ctx *pipe_ctx;
+};
+
+struct send_cursor_info_to_dmu_params {
+ struct pipe_ctx *pipe_ctx;
+ int pipe_idx;
+};
+
struct set_cursor_attribute_params {
struct dc *dc;
struct pipe_ctx *pipe_ctx;
};
+struct hubp_set_cursor_attributes_params {
+ struct hubp *hubp;
+ const struct dc_cursor_attributes *attributes;
+};
+
+struct dpp_set_cursor_attributes_params {
+ struct dpp *dpp;
+ struct dc_cursor_attributes *attributes;
+};
+
struct set_cursor_position_params {
struct dc *dc;
struct pipe_ctx *pipe_ctx;
@@ -747,6 +814,12 @@ union block_sequence_params {
struct program_triplebuffer_params program_triplebuffer_params;
struct set_input_transfer_func_params set_input_transfer_func_params;
struct program_gamut_remap_params program_gamut_remap_params;
+ struct hubp_enable_3dlut_fl_params hubp_enable_3dlut_fl_params;
+ struct tg_setup_vertical_interrupt0_params tg_setup_vertical_interrupt0_params;
+ struct update_info_frame_params update_info_frame_params;
+ struct setup_periodic_interrupt_params setup_periodic_interrupt_params;
+ struct dp_trace_source_sequence_params dp_trace_source_sequence_params;
+ struct set_dmdata_attributes_params set_dmdata_attributes_params;
struct program_manual_trigger_params program_manual_trigger_params;
struct send_dmcub_cmd_params send_dmcub_cmd_params;
struct setup_dpp_params setup_dpp_params;
@@ -855,7 +928,14 @@ union block_sequence_params {
struct dpp_set_scaler_params dpp_set_scaler_params;
struct hubp_mem_program_viewport_params hubp_mem_program_viewport_params;
struct abort_cursor_offload_update_params abort_cursor_offload_update_params;
+ struct cursor_lock_params cursor_lock_params;
+ struct begin_cursor_offload_update_params begin_cursor_offload_update_params;
+ struct commit_cursor_offload_update_params commit_cursor_offload_update_params;
+ struct update_cursor_offload_pipe_params update_cursor_offload_pipe_params;
+ struct send_cursor_info_to_dmu_params send_cursor_info_to_dmu_params;
struct set_cursor_attribute_params set_cursor_attribute_params;
+ struct hubp_set_cursor_attributes_params hubp_set_cursor_attributes_params;
+ struct dpp_set_cursor_attributes_params dpp_set_cursor_attributes_params;
struct set_cursor_position_params set_cursor_position_params;
struct set_cursor_sdr_white_level_params set_cursor_sdr_white_level_params;
struct program_output_csc_params program_output_csc_params;
@@ -871,6 +951,12 @@ enum block_sequence_func {
HUBP_UPDATE_PLANE_ADDR,
DPP_SET_INPUT_TRANSFER_FUNC,
DPP_PROGRAM_GAMUT_REMAP,
+ HUBP_ENABLE_3DLUT_FL,
+ OTG_SETUP_VERTICAL_INTERRUPT,
+ HWSS_SETUP_PERIODIC_INTERRUPT,
+ HWSS_UPDATE_INFO_FRAME,
+ DP_TRACE_SOURCE_SEQUENCE,
+ HUBP_SET_DMDATA_ATTRIBUTES,
OPTC_PROGRAM_MANUAL_TRIGGER,
DMUB_SEND_DMCUB_CMD,
DPP_SETUP_DPP,
@@ -975,7 +1061,14 @@ enum block_sequence_func {
DPP_SET_SCALER,
HUBP_MEM_PROGRAM_VIEWPORT,
ABORT_CURSOR_OFFLOAD_UPDATE,
+ HWSS_CURSOR_LOCK,
+ HWSS_BEGIN_CURSOR_OFFLOAD_UPDATE,
+ HWSS_COMMIT_CURSOR_OFFLOAD_UPDATE,
+ HWSS_UPDATE_CURSOR_OFFLOAD_PIPE,
+ DC_SEND_CURSOR_INFO_TO_DMU,
SET_CURSOR_ATTRIBUTE,
+ HUBP_SET_CURSOR_ATTRIBUTES,
+ DPP_SET_CURSOR_ATTRIBUTES,
SET_CURSOR_POSITION,
SET_CURSOR_SDR_WHITE_LEVEL,
PROGRAM_OUTPUT_CSC,
@@ -1463,6 +1556,18 @@ void hwss_tg_wait_for_state(union block_sequence_params *params);
void hwss_tg_set_vtg_params(union block_sequence_params *params);
+void hwss_hubp_enable_3dlut_fl(union block_sequence_params *params);
+
+void hwss_update_info_frame(struct dc *dc, union block_sequence_params *params);
+
+void hwss_setup_periodic_interrupt(struct dc *dc, union block_sequence_params *params);
+
+void hwss_dp_trace_source_sequence(union block_sequence_params *params);
+
+void hwss_set_dmdata_attributes(union block_sequence_params *params);
+
+void hwss_tg_setup_vertical_interrupt0(union block_sequence_params *params);
+
void hwss_tg_setup_vertical_interrupt2(union block_sequence_params *params);
void hwss_dpp_set_hdr_multiplier(union block_sequence_params *params);
@@ -1603,12 +1708,28 @@ void hwss_hubp_mem_program_viewport(union block_sequence_params *params);
void hwss_abort_cursor_offload_update(union block_sequence_params *params);
+void hwss_cursor_lock(union block_sequence_params *params);
+
+void hwss_begin_cursor_offload_update(union block_sequence_params *params);
+
+void hwss_commit_cursor_offload_update(union block_sequence_params *params);
+
+void hwss_update_cursor_offload_pipe(union block_sequence_params *params);
+
+void hwss_send_cursor_info_to_dmu(union block_sequence_params *params);
+
void hwss_set_cursor_attribute(union block_sequence_params *params);
+void hwss_hubp_set_cursor_attributes(union block_sequence_params *params);
+
+void hwss_dpp_set_cursor_attributes(union block_sequence_params *params);
+
void hwss_set_cursor_position(union block_sequence_params *params);
void hwss_set_cursor_sdr_white_level(union block_sequence_params *params);
+void hwss_program_gamut_remap(union block_sequence_params *params);
+
void hwss_program_output_csc(union block_sequence_params *params);
void hwss_hubp_set_legacy_tiling_compat_level(union block_sequence_params *params);
@@ -1695,6 +1816,9 @@ void hwss_add_tg_wait_for_state(struct block_sequence_state *seq_state,
void hwss_add_tg_set_vtg_params(struct block_sequence_state *seq_state,
struct timing_generator *tg, struct dc_crtc_timing *dc_crtc_timing, bool program_fp2);
+void hwss_add_vertical_interrupt_setup(struct block_sequence_state *seq_state,
+ struct timing_generator *tg, uint32_t start_line, uint32_t end_line);
+
void hwss_add_tg_setup_vertical_interrupt2(struct block_sequence_state *seq_state,
struct timing_generator *tg, int start_line);
@@ -2012,6 +2136,14 @@ void hwss_add_set_cursor_attribute(struct block_sequence_state *seq_state,
struct dc *dc,
struct pipe_ctx *pipe_ctx);
+void hwss_add_hubp_set_cursor_attributes(struct block_sequence_state *seq_state,
+ struct hubp *hubp,
+ const struct dc_cursor_attributes *attributes);
+
+void hwss_add_dpp_set_cursor_attributes(struct block_sequence_state *seq_state,
+ struct dpp *dpp,
+ struct dc_cursor_attributes *attributes);
+
void hwss_add_set_cursor_position(struct block_sequence_state *seq_state,
struct dc *dc,
struct pipe_ctx *pipe_ctx);
@@ -2056,4 +2188,10 @@ void hwss_add_opp_program_left_edge_extra_pixel(struct block_sequence_state *seq
enum dc_pixel_encoding pixel_encoding,
bool is_otg_master);
+void hwss_add_hubp_enable_3dlut_fl(struct block_sequence_state *seq_state,
+ struct hubp *hubp);
+
+void hwss_add_set_dmdata_attributes(struct block_sequence_state *seq_state,
+ struct hubp *hubp, struct dc_dmdata_attributes *attr);
+
#endif /* __DC_HW_SEQUENCER_H__ */
--
2.43.0
* [PATCH 06/19] drm/amd/display: Fix implicit narrowing conversion warnings
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (4 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 05/19] drm/amd/display: Align HWSS fast commit path with legacy path Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 07/19] drm/amd/display: Fix double free Chenyu Chen
` (13 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Gaghik Khachatrian, Dillon Varone,
Chenyu Chen
From: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
[Why]
Multiple display source files contain implicit narrowing
conversions when assigning wider integer types (int, uint32_t)
to narrower fields (uint8_t, uint16_t) at hardware register,
protocol, and storage boundaries. These conversions are
intentional but undocumented, and accompanying runtime assertions
add noise without providing compile-time safety.
[How]
Add explicit casts at all intentional narrowing boundaries across
display source files. Use narrower loop variable types where loop
bounds guarantee safe range. Remove runtime assertions paired
with narrowing casts, inline single-use intermediate variables,
and revert block scopes and braces introduced solely to contain
those assertions.
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../drm/amd/display/dc/basics/custom_float.c | 2 +-
.../gpu/drm/amd/display/dc/basics/dce_calcs.c | 2 +-
.../gpu/drm/amd/display/dc/bios/bios_parser.c | 6 +-
.../drm/amd/display/dc/bios/bios_parser2.c | 20 +--
.../drm/amd/display/dc/bios/command_table.c | 12 +-
.../drm/amd/display/dc/bios/command_table2.c | 4 +-
.../dc/clk_mgr/dce110/dce110_clk_mgr.c | 6 +-
.../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 9 +-
.../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c | 39 +++---
.../display/dc/clk_mgr/dcn301/vg_clk_mgr.c | 11 +-
.../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c | 12 +-
.../dc/clk_mgr/dcn314/dcn314_clk_mgr.c | 12 +-
.../dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 13 +-
.../dc/clk_mgr/dcn316/dcn316_clk_mgr.c | 13 +-
.../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 43 +++---
.../display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c | 14 +-
.../dc/clk_mgr/dcn401/dcn401_clk_mgr.c | 40 +++---
.../display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c | 10 +-
drivers/gpu/drm/amd/display/dc/core/dc.c | 68 ++++-----
.../drm/amd/display/dc/core/dc_hw_sequencer.c | 90 ++++++------
.../gpu/drm/amd/display/dc/core/dc_resource.c | 52 +++----
.../gpu/drm/amd/display/dc/core/dc_stream.c | 20 ++-
.../gpu/drm/amd/display/dc/core/dc_surface.c | 2 +-
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 130 +++++++++---------
drivers/gpu/drm/amd/display/dc/dc_fused_io.c | 6 +-
drivers/gpu/drm/amd/display/dc/dc_helper.c | 12 +-
.../amd/display/dc/dccg/dcn31/dcn31_dccg.c | 4 +-
.../amd/display/dc/dccg/dcn401/dcn401_dccg.c | 20 +--
drivers/gpu/drm/amd/display/dc/dce/dce_aux.c | 4 +-
.../drm/amd/display/dc/dce/dce_clock_source.c | 24 ++--
.../gpu/drm/amd/display/dc/dce/dce_i2c_hw.c | 2 +-
.../drm/amd/display/dc/dce/dce_panel_cntl.c | 4 +-
.../drm/amd/display/dc/dce/dce_transform.c | 8 +-
.../gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c | 14 +-
drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c | 12 +-
.../gpu/drm/amd/display/dc/dce/dmub_replay.c | 23 ++--
.../display/dc/dce80/dce80_timing_generator.c | 2 +-
.../amd/display/dc/dcn10/dcn10_cm_common.c | 4 +-
.../drm/amd/display/dc/dcn30/dcn30_mmhubbub.c | 16 +--
.../dc/dio/dcn401/dcn401_dio_stream_encoder.c | 2 +-
.../dc/dio/dcn42/dcn42_dio_stream_encoder.c | 4 +-
.../drm/amd/display/dc/dml/calcs/dcn_calcs.c | 5 +-
.../drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 7 +-
.../drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 2 +-
.../drm/amd/display/dc/dml/dcn32/dcn32_fpu.c | 37 ++---
.../drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c | 12 +-
.../gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c | 30 ++--
drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c | 2 +-
.../gpu/drm/amd/display/dc/gpio/hw_generic.c | 2 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c | 2 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h | 9 ++
drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c | 2 +-
.../display/dc/hubbub/dcn10/dcn10_hubbub.c | 16 +--
.../display/dc/hubbub/dcn20/dcn20_hubbub.c | 28 ++--
.../display/dc/hubbub/dcn20/dcn20_hubbub.h | 3 +
.../display/dc/hubbub/dcn21/dcn21_hubbub.c | 12 +-
.../display/dc/hubbub/dcn30/dcn30_hubbub.c | 12 +-
.../display/dc/hubbub/dcn31/dcn31_hubbub.c | 12 +-
.../amd/display/dc/hubp/dcn20/dcn20_hubp.c | 4 +-
.../amd/display/dc/hubp/dcn21/dcn21_hubp.c | 4 +-
.../amd/display/dc/hubp/dcn30/dcn30_hubp.c | 4 +-
.../amd/display/dc/hwss/dce110/dce110_hwseq.c | 29 ++--
.../amd/display/dc/hwss/dce120/dce120_hwseq.c | 12 +-
.../amd/display/dc/hwss/dcn10/dcn10_hwseq.c | 12 +-
.../amd/display/dc/hwss/dcn20/dcn20_hwseq.c | 13 +-
.../amd/display/dc/hwss/dcn21/dcn21_hwseq.c | 8 +-
.../amd/display/dc/hwss/dcn30/dcn30_hwseq.c | 16 +--
.../amd/display/dc/hwss/dcn314/dcn314_hwseq.c | 4 +-
.../amd/display/dc/hwss/dcn32/dcn32_hwseq.c | 4 +-
.../amd/display/dc/hwss/dcn35/dcn35_hwseq.c | 6 +-
.../amd/display/dc/hwss/dcn401/dcn401_hwseq.c | 18 ++-
drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h | 2 +-
.../display/dc/link/accessories/link_dp_cts.c | 37 +----
.../display/dc/link/hwss/link_hwss_hpo_dp.c | 4 +-
.../drm/amd/display/dc/link/link_detection.c | 4 +-
.../gpu/drm/amd/display/dc/link/link_dpms.c | 6 +-
.../drm/amd/display/dc/link/link_factory.c | 6 +-
.../amd/display/dc/link/protocols/link_ddc.c | 5 +-
.../dc/link/protocols/link_dp_capability.c | 2 +-
.../dc/link/protocols/link_dp_dpia_bw.c | 4 +-
.../dc/link/protocols/link_dp_panel_replay.c | 23 ++--
.../link/protocols/link_dp_training_8b_10b.c | 10 +-
.../link/protocols/link_edp_panel_control.c | 38 ++---
.../dc/mmhubbub/dcn20/dcn20_mmhubbub.c | 4 +-
.../dc/mmhubbub/dcn32/dcn32_mmhubbub.c | 4 +-
.../amd/display/dc/optc/dcn20/dcn20_optc.c | 4 +-
.../dc/resource/dce110/dce110_resource.c | 4 +-
.../dc/resource/dcn10/dcn10_resource.c | 4 +-
.../dc/resource/dcn20/dcn20_resource.c | 24 ++--
.../dc/resource/dcn21/dcn21_resource.c | 2 +-
.../dc/resource/dcn30/dcn30_resource.c | 14 +-
.../dc/resource/dcn301/dcn301_resource.c | 8 +-
.../dc/resource/dcn302/dcn302_resource.c | 4 +-
.../dc/resource/dcn303/dcn303_resource.c | 4 +-
.../dc/resource/dcn31/dcn31_resource.c | 4 +-
.../dc/resource/dcn314/dcn314_resource.c | 4 +-
.../dc/resource/dcn315/dcn315_resource.c | 4 +-
.../dc/resource/dcn316/dcn316_resource.c | 4 +-
.../dc/resource/dcn32/dcn32_resource.c | 20 +--
.../dc/resource/dcn321/dcn321_resource.c | 4 +-
.../dc/resource/dcn35/dcn35_resource.c | 4 +-
.../dc/resource/dcn351/dcn351_resource.c | 4 +-
.../dc/resource/dcn36/dcn36_resource.c | 4 +-
.../dc/resource/dcn401/dcn401_resource.c | 4 +-
.../dc/resource/dcn42/dcn42_resource.c | 4 +-
.../dcn401/dcn401_soc_and_ip_translator.c | 28 ++--
.../dcn42/dcn42_soc_and_ip_translator.c | 14 +-
107 files changed, 733 insertions(+), 704 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/basics/custom_float.c b/drivers/gpu/drm/amd/display/dc/basics/custom_float.c
index ae05ded9a7f3..d313584335d3 100644
--- a/drivers/gpu/drm/amd/display/dc/basics/custom_float.c
+++ b/drivers/gpu/drm/amd/display/dc/basics/custom_float.c
@@ -90,7 +90,7 @@ static bool build_custom_float(struct fixed31_32 value,
dc_fixpt_lt(dc_fixpt_one, mantiss))
mantiss = dc_fixpt_zero;
else
- mantiss = dc_fixpt_shl(mantiss, format->mantissa_bits);
+ mantiss = dc_fixpt_shl(mantiss, (unsigned char)format->mantissa_bits);
*mantissa = dc_fixpt_floor(mantiss);
diff --git a/drivers/gpu/drm/amd/display/dc/basics/dce_calcs.c b/drivers/gpu/drm/amd/display/dc/basics/dce_calcs.c
index 070195c5393e..fd19b6470628 100644
--- a/drivers/gpu/drm/amd/display/dc/basics/dce_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/basics/dce_calcs.c
@@ -3078,7 +3078,7 @@ bool bw_calcs(struct dc_context *ctx,
}
calculate_bandwidth(dceip, vbios, data);
- yclk_lvl = data->y_clk_level;
+ yclk_lvl = (uint8_t)data->y_clk_level;
calcs_output->nbp_state_change_enable =
data->nbp_state_change_enable;
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
index 7fcba9f3c5af..25c94962e141 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
@@ -803,8 +803,8 @@ static enum bp_result bios_parser_dac_load_detection(
uint32_t bios_0_scratch;
uint32_t device_id_mask = 0;
- bp_params.device_id = get_support_mask_for_device_id(
- DEVICE_TYPE_CRT, engine_id == ENGINE_ID_DACB ? 2 : 1);
+ bp_params.device_id = (uint16_t)get_support_mask_for_device_id(
+ DEVICE_TYPE_CRT, engine_id == ENGINE_ID_DACB ? 2 : 1);
if (bp_params.device_id == ATOM_DEVICE_CRT1_SUPPORT)
device_id_mask = ATOM_S0_CRT1_MASK;
@@ -1382,7 +1382,7 @@ static enum bp_result get_embedded_panel_info_v1_2(
info->ss_id = lvds->ucSS_Id;
{
- uint8_t rr = le16_to_cpu(lvds->usSupportedRefreshRate);
+ uint16_t rr = le16_to_cpu(lvds->usSupportedRefreshRate);
/* Get minimum supported refresh rate*/
if (SUPPORTED_LCD_REFRESHRATE_30Hz & rr)
info->supported_rr.REFRESH_RATE_30HZ = 1;
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index d2fb9c0162e5..dd45cc170fc7 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -157,7 +157,7 @@ static uint8_t bios_parser_get_connectors_number(struct dc_bios *dcb)
break;
}
- return count;
+ return (uint8_t)count;
}
static struct graphics_object_id bios_parser_get_connector_id(
@@ -401,7 +401,7 @@ static enum bp_result bios_parser_get_i2c_info(struct dc_bios *dcb,
return BP_RESULT_BADINPUT;
if (id.type == OBJECT_TYPE_GENERIC) {
- dummy_record.i2c_id = id.id;
+ dummy_record.i2c_id = (uint8_t)id.id;
if (get_gpio_i2c_info(bp, &dummy_record, info) == BP_RESULT_OK)
return BP_RESULT_OK;
@@ -1228,7 +1228,7 @@ static enum bp_result get_disp_caps_v4_1(
if (!disp_cntl_tbl)
return BP_RESULT_BADBIOSTABLE;
- *dce_caps = disp_cntl_tbl->display_caps;
+ *dce_caps = (uint8_t)disp_cntl_tbl->display_caps;
return result;
}
@@ -1252,7 +1252,7 @@ static enum bp_result get_disp_caps_v4_2(
if (!disp_cntl_tbl)
return BP_RESULT_BADBIOSTABLE;
- *dce_caps = disp_cntl_tbl->display_caps;
+ *dce_caps = (uint8_t)disp_cntl_tbl->display_caps;
return result;
}
@@ -1276,7 +1276,7 @@ static enum bp_result get_disp_caps_v4_3(
if (!disp_cntl_tbl)
return BP_RESULT_BADBIOSTABLE;
- *dce_caps = disp_cntl_tbl->display_caps;
+ *dce_caps = (uint8_t)disp_cntl_tbl->display_caps;
return result;
}
@@ -1300,7 +1300,7 @@ static enum bp_result get_disp_caps_v4_4(
if (!disp_cntl_tbl)
return BP_RESULT_BADBIOSTABLE;
- *dce_caps = disp_cntl_tbl->display_caps;
+ *dce_caps = (uint8_t)disp_cntl_tbl->display_caps;
return result;
}
@@ -1324,7 +1324,7 @@ static enum bp_result get_disp_caps_v4_5(
if (!disp_cntl_tbl)
return BP_RESULT_BADBIOSTABLE;
- *dce_caps = disp_cntl_tbl->display_caps;
+ *dce_caps = (uint8_t)disp_cntl_tbl->display_caps;
return result;
}
@@ -2585,7 +2585,7 @@ static enum bp_result get_integrated_info_v11(
info->ext_disp_conn_info.path[i].channel_mapping.raw =
info_v11->extdispconninfo.path[i].channelmapping;
info->ext_disp_conn_info.path[i].caps =
- le16_to_cpu(info_v11->extdispconninfo.path[i].caps);
+ (unsigned short)le16_to_cpu(info_v11->extdispconninfo.path[i].caps);
}
info->ext_disp_conn_info.checksum =
info_v11->extdispconninfo.checksum;
@@ -2790,7 +2790,7 @@ static enum bp_result get_integrated_info_v2_1(
info->ext_disp_conn_info.path[i].channel_mapping.raw =
info_v2_1->extdispconninfo.path[i].channelmapping;
info->ext_disp_conn_info.path[i].caps =
- le16_to_cpu(info_v2_1->extdispconninfo.path[i].caps);
+ (unsigned short)le16_to_cpu(info_v2_1->extdispconninfo.path[i].caps);
}
info->ext_disp_conn_info.checksum =
@@ -2954,7 +2954,7 @@ static enum bp_result get_integrated_info_v2_2(
info->ext_disp_conn_info.path[i].channel_mapping.raw =
info_v2_2->extdispconninfo.path[i].channelmapping;
info->ext_disp_conn_info.path[i].caps =
- le16_to_cpu(info_v2_2->extdispconninfo.path[i].caps);
+ (unsigned short)le16_to_cpu(info_v2_2->extdispconninfo.path[i].caps);
}
info->ext_disp_conn_info.checksum =
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
index f6e22dcecf29..0df84394a325 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c
@@ -1521,8 +1521,8 @@ static enum bp_result adjust_display_pll_v2(
if (pixel_clock_10KHz_in != 0) {
bp_params->adjusted_pixel_clock =
- div_u64(pixel_clk * pixel_clk_10_khz_out,
- pixel_clock_10KHz_in);
+ (uint32_t)div_u64(pixel_clk * pixel_clk_10_khz_out,
+ pixel_clock_10KHz_in);
} else {
bp_params->adjusted_pixel_clock = 0;
BREAK_TO_DEBUGGER();
@@ -1571,8 +1571,8 @@ static enum bp_result adjust_display_pll_v3(
if (pixel_clk_10_kHz_in != 0) {
bp_params->adjusted_pixel_clock =
- div_u64(pixel_clk * pixel_clk_10_khz_out,
- pixel_clk_10_kHz_in);
+ (uint32_t)div_u64(pixel_clk * pixel_clk_10_khz_out,
+ pixel_clk_10_kHz_in);
} else {
bp_params->adjusted_pixel_clock = 0;
BREAK_TO_DEBUGGER();
@@ -2662,8 +2662,8 @@ static enum bp_result set_dce_clock_v2_1(
!cmd->dc_clock_type_to_atom(bp_params->clock_type, &atom_clock_type))
return BP_RESULT_BADINPUT;
- params.asParam.ucDCEClkSrc = atom_pll_id;
- params.asParam.ucDCEClkType = atom_clock_type;
+ params.asParam.ucDCEClkSrc = (uint8_t)atom_pll_id;
+ params.asParam.ucDCEClkType = (uint8_t)atom_clock_type;
if (bp_params->clock_type == DCECLOCK_TYPE_DPREFCLK) {
if (bp_params->flags.USE_GENLOCK_AS_SOURCE_FOR_DPREFCLK)
diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
index 17ef515c6c69..88625daf5378 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/command_table2.c
@@ -929,8 +929,8 @@ static enum bp_result set_dce_clock_v2_1(
&atom_clock_type))
return BP_RESULT_BADINPUT;
- params.param.dceclksrc = atom_pll_id;
- params.param.dceclktype = atom_clock_type;
+ params.param.dceclksrc = (uint8_t)atom_pll_id;
+ params.param.dceclktype = (uint8_t)atom_clock_type;
if (bp_params->clock_type == DCECLOCK_TYPE_DPREFCLK) {
if (bp_params->flags.USE_GENLOCK_AS_SOURCE_FOR_DPREFCLK)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
index cd4c45516616..13296c6ec08f 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
@@ -127,7 +127,7 @@ void dce110_fill_display_configs(
pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
pp_display_cfg->disp_clk_khz = dc->clk_mgr->clks.dispclk_khz;
pp_display_cfg->avail_mclk_switch_time_in_disp_active_us = 0;
- pp_display_cfg->crtc_index = dc->res_pool->res_cap->num_timing_generator;
+ pp_display_cfg->crtc_index = (uint8_t)dc->res_pool->res_cap->num_timing_generator;
for (j = 0; j < context->stream_count; j++) {
int k;
@@ -151,7 +151,7 @@ void dce110_fill_display_configs(
num_cfgs++;
cfg->signal = pipe_ctx->stream->signal;
- cfg->pipe_idx = pipe_ctx->stream_res.tg->inst;
+ cfg->pipe_idx = (uint8_t)pipe_ctx->stream_res.tg->inst;
cfg->src_height = stream->src.height;
cfg->src_width = stream->src.width;
cfg->ddi_channel_mapping =
@@ -189,7 +189,7 @@ void dce110_fill_display_configs(
pp_display_cfg->line_time_in_us = 0;
}
- pp_display_cfg->display_count = num_cfgs;
+ pp_display_cfg->display_count = (uint8_t)num_cfgs;
}
void dce11_pplib_apply_display_requirements(
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 09e83097a623..79eb5ae8ec6f 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -462,8 +462,10 @@ static void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_sm
if (!bw_params->wm_table.entries[i].valid)
continue;
- ranges->reader_wm_sets[num_valid_sets].wm_inst = bw_params->wm_table.entries[i].wm_inst;
- ranges->reader_wm_sets[num_valid_sets].wm_type = bw_params->wm_table.entries[i].wm_type;
+ ranges->reader_wm_sets[num_valid_sets].wm_inst =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ ranges->reader_wm_sets[num_valid_sets].wm_type =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
ranges->reader_wm_sets[num_valid_sets].min_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
ranges->reader_wm_sets[num_valid_sets].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
@@ -476,7 +478,8 @@ static void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_sm
/* add 1 to make it non-overlapping with next lvl */
ranges->reader_wm_sets[num_valid_sets].min_drain_clk_mhz = bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
- ranges->reader_wm_sets[num_valid_sets].max_drain_clk_mhz = bw_params->clk_table.entries[i].dcfclk_mhz;
+ ranges->reader_wm_sets[num_valid_sets].max_drain_clk_mhz =
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
index c7c849b04a50..bf7f92fd41d7 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c
@@ -78,9 +78,9 @@ static const struct clk_mgr_mask clk_mgr_mask = {
/* Query SMU for all clock states for a particular clock */
-static void dcn3_init_single_clock(struct clk_mgr_internal *clk_mgr, uint32_t clk, unsigned int *entry_0, unsigned int *num_levels)
+static void dcn3_init_single_clock(struct clk_mgr_internal *clk_mgr, uint32_t clk, unsigned int *entry_0, uint8_t *num_levels)
{
- unsigned int i;
+ uint8_t i;
char *entry_i = (char *)entry_0;
uint32_t ret = dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, 0xFF);
@@ -109,7 +109,7 @@ static void dcn3_build_wm_range_table(struct clk_mgr_internal *clk_mgr)
void dcn3_init_clocks(struct clk_mgr *clk_mgr_base)
{
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
- unsigned int num_levels;
+ uint8_t num_levels;
memset(&(clk_mgr_base->clks), 0, sizeof(struct dc_clocks));
clk_mgr_base->clks.p_state_change_support = true;
@@ -234,7 +234,7 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz)) {
clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCEFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCEFCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
}
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
@@ -265,10 +265,11 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
if (dc->clk_mgr->dc_mode_softmax_enabled &&
new_clocks->dramclk_khz <= dc->clk_mgr->bw_params->dc_mode_softmax_memclk * 1000)
dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- dc->clk_mgr->bw_params->dc_mode_softmax_memclk);
+ (uint16_t)dc->clk_mgr->bw_params->dc_mode_softmax_memclk);
else
dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->clk_table.entries[
+ clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
}
}
@@ -281,20 +282,20 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
/* set UCLK to requested value if P-State switching is supported, or to re-enable P-State switching */
if (clk_mgr_base->clks.p_state_change_support &&
(update_uclk || !clk_mgr_base->clks.prev_p_state_change_support))
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
if (should_set_clock(safe_to_lower, new_clocks->dppclk_khz, clk_mgr_base->clks.dppclk_khz)) {
if (clk_mgr_base->clks.dppclk_khz > new_clocks->dppclk_khz)
dpp_clock_lowered = true;
clk_mgr_base->clks.dppclk_khz = new_clocks->dppclk_khz;
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_PIXCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_PIXCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dppclk_khz));
update_dppclk = true;
}
if (should_set_clock(safe_to_lower, new_clocks->dispclk_khz, clk_mgr_base->clks.dispclk_khz)) {
clk_mgr_base->clks.dispclk_khz = new_clocks->dispclk_khz;
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dispclk_khz));
update_dispclk = true;
}
@@ -323,7 +324,7 @@ static void dcn3_update_clocks(struct clk_mgr *clk_mgr_base,
static void dcn3_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
{
- unsigned int i;
+ uint8_t i;
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
WatermarksExternal_t *table = (WatermarksExternal_t *) clk_mgr->wm_range_table;
@@ -363,13 +364,14 @@ static void dcn3_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current_
if (current_mode) {
if (clk_mgr_base->clks.p_state_change_support)
dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+ (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
else
dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->clk_table.entries[
+ clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
} else {
dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
}
}
@@ -382,7 +384,8 @@ static void dcn3_set_hard_max_memclk(struct clk_mgr *clk_mgr_base)
return;
dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->clk_table.entries[clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->clk_table.entries[
+ clk_mgr_base->bw_params->clk_table.num_entries - 1].memclk_mhz);
}
static void dcn3_set_max_memclk(struct clk_mgr *clk_mgr_base, unsigned int memclk_mhz)
@@ -392,7 +395,7 @@ static void dcn3_set_max_memclk(struct clk_mgr *clk_mgr_base, unsigned int memcl
if (!clk_mgr->smu_present)
return;
- dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, memclk_mhz);
+ dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)memclk_mhz);
}
static void dcn3_set_min_memclk(struct clk_mgr *clk_mgr_base, unsigned int memclk_mhz)
{
@@ -400,14 +403,14 @@ static void dcn3_set_min_memclk(struct clk_mgr *clk_mgr_base, unsigned int memcl
if (!clk_mgr->smu_present)
return;
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, memclk_mhz);
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)memclk_mhz);
}
/* Get current memclk states, update bounding box */
static void dcn3_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
{
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
- unsigned int num_levels;
+ uint8_t num_levels;
if (!clk_mgr->smu_present)
return;
@@ -480,7 +483,7 @@ static void dcn30_notify_link_rate_change(struct clk_mgr *clk_mgr_base, struct d
if (max_phyclk_req != clk_mgr_base->clks.phyclk_khz) {
clk_mgr_base->clks.phyclk_khz = max_phyclk_req;
- dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_PHYCLK, khz_to_mhz_ceil(clk_mgr_base->clks.phyclk_khz));
+ dcn30_smu_set_hard_min_by_freq(clk_mgr, PPCLK_PHYCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.phyclk_khz));
}
}
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
index 57ba7bc4d16e..caa15cfba7c3 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
@@ -385,7 +385,7 @@ static void vg_init_clocks(struct clk_mgr *clk_mgr)
static void vg_build_watermark_ranges(struct clk_bw_params *bw_params, struct watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -394,8 +394,11 @@ static void vg_build_watermark_ranges(struct clk_bw_params *bw_params, struct wa
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -409,7 +412,7 @@ static void vg_build_watermark_ranges(struct clk_bw_params *bw_params, struct wa
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
index 89fc482947ef..1d94c4bae9de 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
@@ -114,7 +114,7 @@ static int dcn31_get_active_display_cnt_wa(
static void dcn31_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
- int i;
+ uint8_t i;
for (i = 0; i < dc->res_pool->pipe_count; ++i) {
struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -424,7 +424,7 @@ static struct dcn31_watermarks dummy_wms = { 0 };
static void dcn31_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn31_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -433,8 +433,10 @@ static void dcn31_build_watermark_ranges(struct clk_bw_params *bw_params, struct
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -448,7 +450,7 @@ static void dcn31_build_watermark_ranges(struct clk_bw_params *bw_params, struct
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
index b08a70a2f571..1814ec248dab 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
@@ -149,7 +149,7 @@ static void dcn314_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state
bool safe_to_lower, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
- int i;
+ uint8_t i;
for (i = 0; i < dc->res_pool->pipe_count; ++i) {
struct pipe_ctx *pipe = safe_to_lower
@@ -495,7 +495,7 @@ static struct dcn314_ss_info_table ss_info_table = {
static void dcn314_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn314_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -504,8 +504,10 @@ static void dcn314_build_watermark_ranges(struct clk_bw_params *bw_params, struc
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -519,7 +521,7 @@ static void dcn314_build_watermark_ranges(struct clk_bw_params *bw_params, struc
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
index 3a651c1a866d..382e1b891c47 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
@@ -100,7 +100,7 @@ static bool should_disable_otg(struct pipe_ctx *pipe)
static void dcn315_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
- int i;
+ uint8_t i;
for (i = 0; i < dc->res_pool->pipe_count; ++i) {
struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -384,7 +384,7 @@ static struct dcn315_watermarks dummy_wms = { 0 };
static void dcn315_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn315_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -393,8 +393,11 @@ static void dcn315_build_watermark_ranges(struct clk_bw_params *bw_params, struc
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -408,7 +411,7 @@ static void dcn315_build_watermark_ranges(struct clk_bw_params *bw_params, struc
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
index e9d492d8c8d4..a162a453447c 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
@@ -103,7 +103,7 @@ static void dcn316_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state
bool safe_to_lower, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
- int i;
+ uint8_t i;
for (i = 0; i < dc->res_pool->pipe_count; ++i) {
struct pipe_ctx *pipe = safe_to_lower
@@ -350,7 +350,7 @@ static struct dcn316_watermarks dummy_wms = { 0 };
static void dcn316_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn316_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -359,8 +359,11 @@ static void dcn316_build_watermark_ranges(struct clk_bw_params *bw_params, struc
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -374,7 +377,7 @@ static void dcn316_build_watermark_ranges(struct clk_bw_params *bw_params, struc
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
index f427154a54f8..8773a8321735 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c
@@ -132,7 +132,7 @@ static const struct clk_mgr_mask clk_mgr_mask_dcn321 = {
static void dcn32_init_single_clock(struct clk_mgr_internal *clk_mgr, PPCLK_e clk, unsigned int *entry_0,
unsigned int *num_levels)
{
- unsigned int i;
+ uint8_t i;
char *entry_i = (char *)entry_0;
uint32_t ret = dcn30_smu_get_dpm_freq_by_index(clk_mgr, clk, 0xFF);
@@ -409,7 +409,7 @@ static void dcn32_update_clocks_update_dentist(
* floored in Mhz to describe the intended clock.
*/
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK,
- khz_to_mhz_floor(temp_dispclk_khz));
+ (uint16_t)khz_to_mhz_floor(temp_dispclk_khz));
if (dc->debug.override_dispclk_programming) {
REG_GET(DENTIST_DISPCLK_CNTL,
@@ -456,7 +456,7 @@ static void dcn32_update_clocks_update_dentist(
* floored in Mhz to describe the intended clock.
*/
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DISPCLK,
- khz_to_mhz_floor(clk_mgr->base.clks.dispclk_khz));
+ (uint16_t)khz_to_mhz_floor(clk_mgr->base.clks.dispclk_khz));
if (dc->debug.override_dispclk_programming) {
REG_GET(DENTIST_DISPCLK_CNTL,
@@ -680,7 +680,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_khz, clk_mgr_base->clks.dcfclk_khz) &&
!dc->work_arounds.clock_update_disable_mask.dcfclk) {
clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DCFCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz));
}
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz) &&
@@ -715,13 +715,13 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
* frequency.
*/
if (dc->debug.disable_dc_mode_overwrite) {
- dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, dc->clk_mgr->bw_params->max_memclk_mhz);
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, dc->clk_mgr->bw_params->max_memclk_mhz);
+ dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)dc->clk_mgr->bw_params->max_memclk_mhz);
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)dc->clk_mgr->bw_params->max_memclk_mhz);
} else
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- dc->clk_mgr->bw_params->dc_mode_softmax_memclk);
+ (uint16_t)dc->clk_mgr->bw_params->dc_mode_softmax_memclk);
} else {
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, dc->clk_mgr->bw_params->max_memclk_mhz);
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)dc->clk_mgr->bw_params->max_memclk_mhz);
}
}
}
@@ -755,9 +755,10 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
!dc->work_arounds.clock_update_disable_mask.uclk) {
if (dc->clk_mgr->dc_mode_softmax_enabled && dc->debug.disable_dc_mode_overwrite)
dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK,
- max((int)dc->clk_mgr->bw_params->dc_mode_softmax_memclk, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz)));
+ (uint16_t)max((int)dc->clk_mgr->bw_params->dc_mode_softmax_memclk,
+ khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz)));
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
}
if (clk_mgr_base->clks.num_ways != new_clocks->num_ways &&
@@ -783,7 +784,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
* floored in Mhz to describe the intended clock.
*/
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK,
- khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
+ (uint16_t)khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
update_dppclk = true;
}
@@ -803,7 +804,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
should_set_clock(safe_to_lower, new_clocks->ref_dtbclk_khz / 1000, clk_mgr_base->clks.ref_dtbclk_khz / 1000)) {
/* DCCG requires KHz precision for DTBCLK */
clk_mgr_base->clks.ref_dtbclk_khz =
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DTBCLK, khz_to_mhz_ceil(new_clocks->ref_dtbclk_khz));
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DTBCLK, (uint16_t)khz_to_mhz_ceil(new_clocks->ref_dtbclk_khz));
dcn32_update_clocks_update_dtb_dto(clk_mgr, context, clk_mgr_base->clks.ref_dtbclk_khz);
}
@@ -822,7 +823,7 @@ static void dcn32_update_clocks(struct clk_mgr *clk_mgr_base,
* floored in Mhz to describe the intended clock.
*/
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_DPPCLK,
- khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
+ (uint16_t)khz_to_mhz_floor(clk_mgr_base->clks.dppclk_khz));
} else {
/* if clock is being raised, increase refclk before lowering DTO */
if (update_dppclk || update_dispclk)
@@ -968,7 +969,7 @@ static void dcn32_clock_read_ss_info(struct clk_mgr_internal *clk_mgr)
}
static void dcn32_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
{
- unsigned int i;
+ uint8_t i;
struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
WatermarksExternal_t *table = (WatermarksExternal_t *) clk_mgr->wm_range_table;
@@ -1002,13 +1003,13 @@ static void dcn32_set_hard_min_memclk(struct clk_mgr *clk_mgr_base, bool current
if (current_mode) {
if (clk_mgr_base->clks.p_state_change_support)
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
+ (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dramclk_khz));
else
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->max_memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->max_memclk_mhz);
} else {
dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK,
- clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
+ (uint16_t)clk_mgr_base->bw_params->clk_table.entries[0].memclk_mhz);
}
}
@@ -1020,7 +1021,7 @@ static void dcn32_set_hard_max_memclk(struct clk_mgr *clk_mgr_base)
if (!clk_mgr->smu_present)
return;
- dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, clk_mgr_base->bw_params->max_memclk_mhz);
+ dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)clk_mgr_base->bw_params->max_memclk_mhz);
}
/* Get current memclk states, update bounding box */
@@ -1052,7 +1053,7 @@ static void dcn32_get_memclk_states_from_smu(struct clk_mgr *clk_mgr_base)
clk_mgr_base->bw_params->max_memclk_mhz =
clk_mgr_base->bw_params->clk_table.entries[num_entries_per_clk->num_memclk_levels - 1].memclk_mhz;
- clk_mgr_base->bw_params->clk_table.num_entries = num_levels ? num_levels : 1;
+ clk_mgr_base->bw_params->clk_table.num_entries = (uint8_t)(num_levels ? num_levels : 1);
if (clk_mgr->dpm_present && !num_levels)
clk_mgr->dpm_present = false;
@@ -1109,7 +1110,7 @@ static void dcn32_set_max_memclk(struct clk_mgr *clk_mgr_base, unsigned int memc
if (!clk_mgr->smu_present)
return;
- dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, memclk_mhz);
+ dcn30_smu_set_hard_max_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)memclk_mhz);
}
static void dcn32_set_min_memclk(struct clk_mgr *clk_mgr_base, unsigned int memclk_mhz)
@@ -1119,7 +1120,7 @@ static void dcn32_set_min_memclk(struct clk_mgr *clk_mgr_base, unsigned int memc
if (!clk_mgr->smu_present)
return;
- dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, memclk_mhz);
+ dcn32_smu_set_hard_min_by_freq(clk_mgr, PPCLK_UCLK, (uint16_t)memclk_mhz);
}
static struct clk_mgr_funcs dcn32_funcs = {
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
index 2798088842f4..688a4bdc20b5 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
@@ -190,7 +190,7 @@ void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *context
bool safe_to_lower, bool disable)
{
struct dc *dc = clk_mgr_base->ctx->dc;
- int i;
+ uint8_t i;
if (dc->ctx->dce_environment == DCE_ENV_DIAG)
return;
@@ -332,7 +332,7 @@ static uint8_t get_lowest_dpia_index(const struct dc_link *link)
continue;
if (idx > dc_struct->links[i]->link_index)
- idx = dc_struct->links[i]->link_index;
+ idx = (uint8_t)dc_struct->links[i]->link_index;
}
return idx;
@@ -863,7 +863,7 @@ static void dcn35_read_ss_info_from_lut(struct clk_mgr_internal *clk_mgr)
static void dcn35_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn35_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -872,8 +872,10 @@ static void dcn35_build_watermark_ranges(struct clk_bw_params *bw_params, struct
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -887,7 +889,7 @@ static void dcn35_build_watermark_ranges(struct clk_bw_params *bw_params, struct
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
index fa284523d8a4..2b7718336135 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn401/dcn401_clk_mgr.c
@@ -174,7 +174,7 @@ static void dcn401_init_single_clock(struct clk_mgr_internal *clk_mgr, PPCLK_e c
/* if the initial message failed, num_levels will be 0 */
for (i = 0; i < *num_levels && i < ARRAY_SIZE(clk_mgr->base.bw_params->clk_table.entries); i++) {
- *((unsigned int *)entry_i) = (dcn401_smu_get_dpm_freq_by_index(clk_mgr, clk, i) & 0xFFFF);
+ *((unsigned int *)entry_i) = (dcn401_smu_get_dpm_freq_by_index(clk_mgr, clk, (uint8_t)i) & 0xFFFF);
entry_i += sizeof(clk_mgr->base.bw_params->clk_table.entries[0]);
}
}
@@ -182,15 +182,15 @@ static void dcn401_init_single_clock(struct clk_mgr_internal *clk_mgr, PPCLK_e c
static void dcn401_build_wm_range_table(struct clk_mgr *clk_mgr)
{
/* For min clocks use as reported by PM FW and report those as min */
- uint16_t min_uclk_mhz = clk_mgr->bw_params->clk_table.entries[0].memclk_mhz;
- uint16_t min_dcfclk_mhz = clk_mgr->bw_params->clk_table.entries[0].dcfclk_mhz;
+ unsigned int min_uclk_mhz = clk_mgr->bw_params->clk_table.entries[0].memclk_mhz;
+ unsigned int min_dcfclk_mhz = clk_mgr->bw_params->clk_table.entries[0].dcfclk_mhz;
/* Set A - Normal - default values */
clk_mgr->bw_params->wm_table.nv_entries[WM_A].valid = true;
clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.wm_type = WATERMARKS_CLOCK_RANGE;
- clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_dcfclk = min_dcfclk_mhz;
+ clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_dcfclk = (uint16_t)min_dcfclk_mhz;
clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.max_dcfclk = 0xFFFF;
- clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_uclk = min_uclk_mhz;
+ clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.min_uclk = (uint16_t)min_uclk_mhz;
clk_mgr->bw_params->wm_table.nv_entries[WM_A].pmfw_breakdown.max_uclk = 0xFFFF;
/* Set B - Unused on dcn4 */
@@ -201,9 +201,9 @@ static void dcn401_build_wm_range_table(struct clk_mgr *clk_mgr)
if (clk_mgr->ctx->dc->bb_overrides.dummy_clock_change_latency_ns != 0x7FFFFFFF) {
clk_mgr->bw_params->wm_table.nv_entries[WM_1A].valid = true;
clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.wm_type = WATERMARKS_DUMMY_PSTATE;
- clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.min_dcfclk = min_dcfclk_mhz;
+ clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.min_dcfclk = (uint16_t)min_dcfclk_mhz;
clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.max_dcfclk = 0xFFFF;
- clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.min_uclk = min_uclk_mhz;
+ clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.min_uclk = (uint16_t)min_uclk_mhz;
clk_mgr->bw_params->wm_table.nv_entries[WM_1A].pmfw_breakdown.max_uclk = 0xFFFF;
} else {
clk_mgr->bw_params->wm_table.nv_entries[WM_1A].valid = false;
@@ -604,10 +604,10 @@ static int dcn401_set_hard_min_by_freq_optimized(struct clk_mgr_internal *clk_mg
* clock returned is less than requested, then we will ceil the
* requested value to mhz and call it again.
*/
- int actual_clk_khz = dcn401_smu_set_hard_min_by_freq(clk_mgr, clk, khz_to_mhz_floor(requested_clk_khz));
+ int actual_clk_khz = dcn401_smu_set_hard_min_by_freq(clk_mgr, clk, (uint16_t)khz_to_mhz_floor(requested_clk_khz));
if (actual_clk_khz < requested_clk_khz)
- actual_clk_khz = dcn401_smu_set_hard_min_by_freq(clk_mgr, clk, khz_to_mhz_ceil(requested_clk_khz));
+ actual_clk_khz = dcn401_smu_set_hard_min_by_freq(clk_mgr, clk, (uint16_t)khz_to_mhz_ceil(requested_clk_khz));
return actual_clk_khz;
}
@@ -849,7 +849,7 @@ static unsigned int dcn401_build_update_bandwidth_clocks_sequence(
clk_mgr_base->clks.dcfclk_khz = new_clocks->dcfclk_khz;
if (dcn401_is_ppclk_dpm_enabled(clk_mgr_internal, PPCLK_DCFCLK)) {
block_sequence[num_steps].params.update_hardmin_params.ppclk = PPCLK_DCFCLK;
- block_sequence[num_steps].params.update_hardmin_params.freq_mhz = khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz);
+ block_sequence[num_steps].params.update_hardmin_params.freq_mhz = (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_khz);
block_sequence[num_steps].params.update_hardmin_params.response = NULL;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_HARDMIN_PPCLK;
num_steps++;
@@ -860,7 +860,7 @@ static unsigned int dcn401_build_update_bandwidth_clocks_sequence(
if (should_set_clock(safe_to_lower, new_clocks->dcfclk_deep_sleep_khz, clk_mgr_base->clks.dcfclk_deep_sleep_khz)) {
clk_mgr_base->clks.dcfclk_deep_sleep_khz = new_clocks->dcfclk_deep_sleep_khz;
if (dcn401_is_ppclk_dpm_enabled(clk_mgr_internal, PPCLK_DCFCLK)) {
- block_sequence[num_steps].params.update_deep_sleep_dcfclk_params.freq_mhz = khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_deep_sleep_khz);
+ block_sequence[num_steps].params.update_deep_sleep_dcfclk_params.freq_mhz = (uint16_t)khz_to_mhz_ceil(clk_mgr_base->clks.dcfclk_deep_sleep_khz);
block_sequence[num_steps].func = CLK_MGR401_UPDATE_DEEP_SLEEP_DCFCLK;
num_steps++;
}
@@ -984,24 +984,24 @@ static unsigned int dcn401_build_update_bandwidth_clocks_sequence(
/* When idle DPM is enabled, need to send active and idle hardmins separately */
/* CLK_MGR401_UPDATE_ACTIVE_HARDMINS */
if ((update_active_uclk || update_active_fclk) && is_idle_dpm_enabled) {
- block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = active_uclk_mhz;
- block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = active_fclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = (uint16_t)active_uclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = (uint16_t)active_fclk_mhz;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_ACTIVE_HARDMINS;
num_steps++;
}
/* CLK_MGR401_UPDATE_IDLE_HARDMINS */
if ((update_idle_uclk || update_idle_fclk) && is_idle_dpm_enabled) {
- block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = idle_uclk_mhz;
- block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = idle_fclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = (uint16_t)idle_uclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = (uint16_t)idle_fclk_mhz;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_IDLE_HARDMINS;
num_steps++;
}
/* CLK_MGR401_UPDATE_SUBVP_HARDMINS */
if ((update_subvp_prefetch_dramclk || update_subvp_prefetch_fclk) && is_df_throttle_opt_enabled) {
- block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = subvp_prefetch_dramclk_mhz;
- block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = subvp_prefetch_fclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.uclk_mhz = (uint16_t)subvp_prefetch_dramclk_mhz;
+ block_sequence[num_steps].params.update_idle_hardmin_params.fclk_mhz = (uint16_t)subvp_prefetch_fclk_mhz;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_SUBVP_HARDMINS;
num_steps++;
}
@@ -1010,7 +1010,7 @@ static unsigned int dcn401_build_update_bandwidth_clocks_sequence(
if (update_active_uclk || update_idle_uclk) {
if (!is_idle_dpm_enabled) {
block_sequence[num_steps].params.update_hardmin_params.ppclk = PPCLK_UCLK;
- block_sequence[num_steps].params.update_hardmin_params.freq_mhz = active_uclk_mhz;
+ block_sequence[num_steps].params.update_hardmin_params.freq_mhz = (uint16_t)active_uclk_mhz;
block_sequence[num_steps].params.update_hardmin_params.response = NULL;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_HARDMIN_PPCLK;
num_steps++;
@@ -1123,7 +1123,7 @@ static unsigned int dcn401_build_update_display_clocks_sequence(
dcn401_is_ppclk_dpm_enabled(clk_mgr_internal, PPCLK_DTBCLK)) {
/* DCCG requires KHz precision for DTBCLK */
block_sequence[num_steps].params.update_hardmin_params.ppclk = PPCLK_DTBCLK;
- block_sequence[num_steps].params.update_hardmin_params.freq_mhz = khz_to_mhz_ceil(new_clocks->ref_dtbclk_khz);
+ block_sequence[num_steps].params.update_hardmin_params.freq_mhz = (uint16_t)khz_to_mhz_ceil(new_clocks->ref_dtbclk_khz);
block_sequence[num_steps].params.update_hardmin_params.response = &clk_mgr_base->clks.ref_dtbclk_khz;
block_sequence[num_steps].func = CLK_MGR401_UPDATE_HARDMIN_PPCLK;
num_steps++;
@@ -1318,7 +1318,7 @@ static void dcn401_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
/* collect valid ranges, place in pmfw table */
for (i = 0; i < WM_SET_COUNT; i++)
if (clk_mgr->base.bw_params->wm_table.nv_entries[i].valid) {
- table->Watermarks.WatermarkRow[i].WmSetting = i;
+ table->Watermarks.WatermarkRow[i].WmSetting = (uint8_t)i;
table->Watermarks.WatermarkRow[i].Flags = clk_mgr->base.bw_params->wm_table.nv_entries[i].pmfw_breakdown.wm_type;
}
dcn401_smu_set_dram_addr_high(clk_mgr, clk_mgr->wm_range_table_addr >> 32);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
index 6a97ce69a562..72b0f3f8c2fd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
@@ -634,7 +634,7 @@ static void dcn42_read_ss_info_from_lut(struct clk_mgr_internal *clk_mgr)
void dcn42_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn42_watermarks *table)
{
- int i, num_valid_sets;
+ uint8_t i, num_valid_sets;
num_valid_sets = 0;
@@ -643,8 +643,10 @@ void dcn42_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn42_
if (!bw_params->wm_table.entries[i].valid)
continue;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting = bw_params->wm_table.entries[i].wm_inst;
- table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType = bw_params->wm_table.entries[i].wm_type;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmSetting =
+ (uint8_t)bw_params->wm_table.entries[i].wm_inst;
+ table->WatermarkRow[WM_DCFCLK][num_valid_sets].WmType =
+ (uint8_t)bw_params->wm_table.entries[i].wm_type;
/* We will not select WM based on fclk, so leave it as unconstrained */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinClock = 0;
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxClock = 0xFFFF;
@@ -658,7 +660,7 @@ void dcn42_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn42_
bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
- bw_params->clk_table.entries[i].dcfclk_mhz;
+ (uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
} else {
/* unconstrained for memory retraining */
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 534f770949d5..9ff5503d4df7 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -235,7 +235,7 @@ static bool create_links(
* variants of the same card.
*/
for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) {
- struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i);
+ struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, (uint8_t)i);
struct link_init_data link_init_params = {0};
struct dc_link *link;
@@ -246,7 +246,7 @@ static bool create_links(
link_init_params.ctx = dc->ctx;
/* next BIOS object table connector */
- link_init_params.connector_index = i;
+ link_init_params.connector_index = (uint8_t)i;
link_init_params.link_index = dc->link_count;
link_init_params.dc = dc;
link = dc->link_srv->create_link(&link_init_params);
@@ -267,7 +267,7 @@ static bool create_links(
struct dc_link *link;
link_init_params.ctx = dc->ctx;
- link_init_params.connector_index = i;
+ link_init_params.connector_index = (uint8_t)i;
link_init_params.link_index = dc->link_count;
link_init_params.dc = dc;
link_init_params.is_dpia_link = true;
@@ -659,12 +659,12 @@ bool dc_stream_configure_crc(struct dc *dc, struct dc_stream_state *stream,
/* By default, capture the full frame */
param.windowa_x_start = 0;
param.windowa_y_start = 0;
- param.windowa_x_end = pipe->stream->timing.h_addressable;
- param.windowa_y_end = pipe->stream->timing.v_addressable;
+ param.windowa_x_end = (uint16_t)pipe->stream->timing.h_addressable;
+ param.windowa_y_end = (uint16_t)pipe->stream->timing.v_addressable;
param.windowb_x_start = 0;
param.windowb_y_start = 0;
- param.windowb_x_end = pipe->stream->timing.h_addressable;
- param.windowb_y_end = pipe->stream->timing.v_addressable;
+ param.windowb_x_end = (uint16_t)pipe->stream->timing.h_addressable;
+ param.windowb_y_end = (uint16_t)pipe->stream->timing.v_addressable;
param.crc_poly_mode = crc_poly_mode;
if (crc_window) {
@@ -2137,7 +2137,7 @@ static uint8_t get_stream_mask(struct dc *dc, struct dc_state *context)
stream_mask |= 1 << i;
}
- return stream_mask;
+ return (uint8_t)stream_mask;
}
void dc_z10_restore(const struct dc *dc)
@@ -2482,7 +2482,7 @@ enum dc_status dc_commit_streams(struct dc *dc, struct dc_commit_streams_params
set[i].stream = stream;
if (status) {
- set[i].plane_count = status->plane_count;
+ set[i].plane_count = (uint8_t)status->plane_count;
for (j = 0; j < status->plane_count; j++)
set[i].plane_states[j] = status->plane_states[j];
}
@@ -2541,7 +2541,7 @@ enum dc_status dc_commit_streams(struct dc *dc, struct dc_commit_streams_params
for (i = 0; i < params->stream_count; i++) {
for (j = 0; j < context->stream_count; j++) {
if (params->streams[i]->stream_id == context->streams[j]->stream_id)
- params->streams[i]->out.otg_offset = context->stream_status[j].primary_otg_inst;
+ params->streams[i]->out.otg_offset = (uint8_t)context->stream_status[j].primary_otg_inst;
if (dc_is_embedded_signal(params->streams[i]->signal)) {
struct dc_stream_status *status = dc_state_get_stream_status(context, params->streams[i]);
@@ -2669,7 +2669,7 @@ void dc_post_update_surfaces_to_stream(struct dc *dc)
for (i = 0; i < dc->res_pool->pipe_count; i++)
if (context->res_ctx.pipe_ctx[i].stream == NULL ||
context->res_ctx.pipe_ctx[i].plane_state == NULL) {
- context->res_ctx.pipe_ctx[i].pipe_idx = i;
+ context->res_ctx.pipe_ctx[i].pipe_idx = (uint8_t)i;
dc->hwss.disable_plane(dc, context, &context->res_ctx.pipe_ctx[i]);
}
@@ -3219,10 +3219,10 @@ static void copy_surface_update_to_plane(
surface->flip_immediate =
srf_update->flip_addr->flip_immediate;
surface->time.time_elapsed_in_us[surface->time.index] =
- srf_update->flip_addr->flip_timestamp_in_us -
- surface->time.prev_update_time_in_us;
+ (unsigned int)(srf_update->flip_addr->flip_timestamp_in_us -
+ surface->time.prev_update_time_in_us);
surface->time.prev_update_time_in_us =
- srf_update->flip_addr->flip_timestamp_in_us;
+ (unsigned int)srf_update->flip_addr->flip_timestamp_in_us;
surface->time.index++;
if (surface->time.index >= DC_PLANE_UPDATE_TIMES_MAX)
surface->time.index = 0;
@@ -3998,7 +3998,7 @@ void dc_dmub_update_dirty_rect(struct dc *dc,
else
update_dirty_rect->cmd_version = DMUB_CMD_CURSOR_UPDATE_VERSION_1;
- update_dirty_rect->dirty_rect_count = flip_addr->dirty_rect_count;
+ update_dirty_rect->dirty_rect_count = (uint8_t)flip_addr->dirty_rect_count;
memcpy(update_dirty_rect->src_dirty_rects, flip_addr->dirty_rects,
sizeof(flip_addr->dirty_rects));
for (j = 0; j < dc->res_pool->pipe_count; j++) {
@@ -4009,9 +4009,9 @@ void dc_dmub_update_dirty_rect(struct dc *dc,
if (pipe_ctx->plane_state != plane_state)
continue;
- update_dirty_rect->panel_inst = panel_inst;
- update_dirty_rect->pipe_idx = j;
- update_dirty_rect->otg_inst = pipe_ctx->stream_res.tg->inst;
+ update_dirty_rect->panel_inst = (uint8_t)panel_inst;
+ update_dirty_rect->pipe_idx = (uint8_t)j;
+ update_dirty_rect->otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_NO_WAIT);
}
}
@@ -4059,7 +4059,7 @@ static void build_dmub_update_dirty_rect(
else
update_dirty_rect->cmd_version = DMUB_CMD_CURSOR_UPDATE_VERSION_1;
- update_dirty_rect->dirty_rect_count = flip_addr->dirty_rect_count;
+ update_dirty_rect->dirty_rect_count = (uint8_t)flip_addr->dirty_rect_count;
memcpy(update_dirty_rect->src_dirty_rects, flip_addr->dirty_rects,
sizeof(flip_addr->dirty_rects));
for (j = 0; j < dc->res_pool->pipe_count; j++) {
@@ -4069,9 +4069,9 @@ static void build_dmub_update_dirty_rect(
continue;
if (pipe_ctx->plane_state != plane_state)
continue;
- update_dirty_rect->panel_inst = panel_inst;
- update_dirty_rect->pipe_idx = j;
- update_dirty_rect->otg_inst = pipe_ctx->stream_res.tg->inst;
+ update_dirty_rect->panel_inst = (uint8_t)panel_inst;
+ update_dirty_rect->pipe_idx = (uint8_t)j;
+ update_dirty_rect->otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
dc_dmub_cmd[*dmub_cmd_count].dmub_cmd = cmd;
dc_dmub_cmd[*dmub_cmd_count].wait_type = DM_DMUB_WAIT_TYPE_NO_WAIT;
(*dmub_cmd_count)++;
@@ -4374,7 +4374,7 @@ static void commit_planes_for_stream(struct dc *dc,
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_dig = 1;
- inst_flags.dig_inst = top_pipe_to_program->stream_res.tg->inst;
+ inst_flags.dig_inst = (uint8_t)top_pipe_to_program->stream_res.tg->inst;
dmub_hw_lock_mgr_cmd(dc->ctx->dmub_srv,
true,
@@ -4641,7 +4641,7 @@ static void commit_planes_for_stream(struct dc *dc,
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_dig = 1;
- inst_flags.dig_inst = top_pipe_to_program->stream_res.tg->inst;
+ inst_flags.dig_inst = (uint8_t)top_pipe_to_program->stream_res.tg->inst;
dmub_hw_lock_mgr_cmd(dc->ctx->dmub_srv,
false,
@@ -6171,12 +6171,12 @@ bool dc_process_dmub_aux_transfer_async(struct dc *dc,
else
cmd.dp_aux_access.aux_control.type = AUX_CHANNEL_LEGACY_DDC;
- cmd.dp_aux_access.aux_control.instance = dc->links[link_index]->ddc_hw_inst;
+ cmd.dp_aux_access.aux_control.instance = (uint8_t)dc->links[link_index]->ddc_hw_inst;
cmd.dp_aux_access.aux_control.sw_crc_enabled = 0;
cmd.dp_aux_access.aux_control.timeout = 0;
cmd.dp_aux_access.aux_control.dpaux.address = payload->address;
cmd.dp_aux_access.aux_control.dpaux.is_i2c_over_aux = payload->i2c_over_aux;
- cmd.dp_aux_access.aux_control.dpaux.length = payload->length;
+ cmd.dp_aux_access.aux_control.dpaux.length = (uint8_t)payload->length;
/* set aux action */
if (payload->i2c_over_aux) {
@@ -6242,7 +6242,7 @@ bool dc_smart_power_oled_enable(const struct dc_link *link, bool enable, uint16_
}
if (pipe_ctx)
- otg_inst = pipe_ctx->stream_res.tg->inst;
+ otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
// before enable smart power OLED, we need to call set pipe for DMUB to set ABM config
if (enable) {
@@ -6259,11 +6259,11 @@ bool dc_smart_power_oled_enable(const struct dc_link *link, bool enable, uint16_
sizeof(struct dmub_rb_cmd_smart_power_oled_enable_data) - sizeof(struct dmub_cmd_header);
cmd.smart_power_oled_enable.header.ret_status = 1;
cmd.smart_power_oled_enable.data.enable = enable;
- cmd.smart_power_oled_enable.data.panel_inst = panel_inst;
+ cmd.smart_power_oled_enable.data.panel_inst = (uint8_t)panel_inst;
cmd.smart_power_oled_enable.data.peak_nits = peak_nits;
cmd.smart_power_oled_enable.data.otg_inst = otg_inst;
- cmd.smart_power_oled_enable.data.digfe_inst = link->link_enc->preferred_engine;
- cmd.smart_power_oled_enable.data.digbe_inst = link->link_enc->transmitter;
+ cmd.smart_power_oled_enable.data.digfe_inst = (uint8_t)link->link_enc->preferred_engine;
+ cmd.smart_power_oled_enable.data.digbe_inst = (uint8_t)link->link_enc->transmitter;
cmd.smart_power_oled_enable.data.debugcontrol = debug_control;
cmd.smart_power_oled_enable.data.triggerline = triggerline;
@@ -6294,7 +6294,7 @@ bool dc_smart_power_oled_get_max_cll(const struct dc_link *link, unsigned int *p
cmd.smart_power_oled_getmaxcll.header.payload_bytes = sizeof(cmd.smart_power_oled_getmaxcll.data);
cmd.smart_power_oled_getmaxcll.header.ret_status = 1;
- cmd.smart_power_oled_getmaxcll.data.input.panel_inst = panel_inst;
+ cmd.smart_power_oled_getmaxcll.data.input.panel_inst = (uint8_t)panel_inst;
// send cmd and wait for reply
status = dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY);
@@ -6352,7 +6352,7 @@ bool dc_process_dmub_set_config_async(struct dc *dc,
cmd.set_config_access.header.type = DMUB_CMD__DPIA;
cmd.set_config_access.header.sub_type = DMUB_CMD__DPIA_SET_CONFIG_ACCESS;
- cmd.set_config_access.set_config_control.instance = dc->links[link_index]->ddc_hw_inst;
+ cmd.set_config_access.set_config_control.instance = (uint8_t)dc->links[link_index]->ddc_hw_inst;
cmd.set_config_access.set_config_control.cmd_pkt.msg_type = payload->msg_type;
cmd.set_config_access.set_config_control.cmd_pkt.msg_data = payload->msg_data;
@@ -6396,7 +6396,7 @@ enum dc_status dc_process_dmub_set_mst_slots(const struct dc *dc,
cmd.set_mst_alloc_slots.header.type = DMUB_CMD__DPIA;
cmd.set_mst_alloc_slots.header.sub_type = DMUB_CMD__DPIA_MST_ALLOC_SLOTS;
- cmd.set_mst_alloc_slots.mst_slots_control.instance = dc->links[link_index]->ddc_hw_inst;
+ cmd.set_mst_alloc_slots.mst_slots_control.instance = (uint8_t)dc->links[link_index]->ddc_hw_inst;
cmd.set_mst_alloc_slots.mst_slots_control.mst_alloc_slots = mst_alloc_slots;
if (!dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY))
@@ -6436,7 +6436,7 @@ void dc_process_dmub_dpia_set_tps_notification(const struct dc *dc, uint32_t lin
cmd.set_tps_notification.header.type = DMUB_CMD__DPIA;
cmd.set_tps_notification.header.sub_type = DMUB_CMD__DPIA_SET_TPS_NOTIFICATION;
- cmd.set_tps_notification.tps_notification.instance = dc->links[link_index]->ddc_hw_inst;
+ cmd.set_tps_notification.tps_notification.instance = (uint8_t)dc->links[link_index]->ddc_hw_inst;
cmd.set_tps_notification.tps_notification.tps = tps;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
index f8a6916bbd4d..a347d3ff5e6e 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
@@ -347,39 +347,39 @@ void get_surface_visual_confirm_color(
switch (pipe_ctx->plane_res.scl_data.format) {
case PIXEL_FORMAT_ARGB8888:
/* set border color to red */
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
if (pipe_ctx->plane_state->layer_index > 0) {
/* set border color to pink */
- color->color_b_cb = color_value;
- color->color_g_y = color_value * 0.5;
+ color->color_b_cb = (uint16_t)color_value;
+ color->color_g_y = (uint16_t)(color_value / 2);
}
break;
case PIXEL_FORMAT_ARGB2101010:
/* set border color to blue */
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
if (pipe_ctx->plane_state->layer_index > 0) {
/* set border color to cyan */
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
}
break;
case PIXEL_FORMAT_420BPP8:
/* set border color to green */
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
break;
case PIXEL_FORMAT_420BPP10:
/* set border color to yellow */
- color->color_g_y = color_value;
- color->color_r_cr = color_value;
+ color->color_g_y = (uint16_t)color_value;
+ color->color_r_cr = (uint16_t)color_value;
break;
case PIXEL_FORMAT_FP16:
/* set border color to white */
- color->color_r_cr = color_value;
- color->color_b_cb = color_value;
- color->color_g_y = color_value;
+ color->color_r_cr = (uint16_t)color_value;
+ color->color_b_cb = (uint16_t)color_value;
+ color->color_g_y = (uint16_t)color_value;
if (pipe_ctx->plane_state->layer_index > 0) {
/* set border color to orange */
- color->color_g_y = 0.22 * color_value;
+ color->color_g_y = (uint16_t)((color_value * 22) / 100);
color->color_b_cb = 0;
}
break;
@@ -405,21 +405,21 @@ void get_hdr_visual_confirm_color(
case PIXEL_FORMAT_ARGB2101010:
if (top_pipe_ctx->stream->out_transfer_func.tf == TRANSFER_FUNCTION_PQ) {
/* HDR10, ARGB2101010 - set border color to red */
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
} else if (top_pipe_ctx->stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22) {
/* FreeSync 2 ARGB2101010 - set border color to pink */
- color->color_r_cr = color_value;
- color->color_b_cb = color_value;
+ color->color_r_cr = (uint16_t)color_value;
+ color->color_b_cb = (uint16_t)color_value;
} else
is_sdr = true;
break;
case PIXEL_FORMAT_FP16:
if (top_pipe_ctx->stream->out_transfer_func.tf == TRANSFER_FUNCTION_PQ) {
/* HDR10, FP16 - set border color to blue */
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
} else if (top_pipe_ctx->stream->out_transfer_func.tf == TRANSFER_FUNCTION_GAMMA22) {
/* FreeSync 2 HDR - set border color to green */
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
} else
is_sdr = true;
break;
@@ -430,9 +430,9 @@ void get_hdr_visual_confirm_color(
if (is_sdr) {
/* SDR - set border color to Gray */
- color->color_r_cr = color_value/2;
- color->color_b_cb = color_value/2;
- color->color_g_y = color_value/2;
+ color->color_r_cr = (uint16_t)(color_value / 2);
+ color->color_b_cb = (uint16_t)(color_value / 2);
+ color->color_g_y = (uint16_t)(color_value / 2);
}
}
@@ -456,7 +456,7 @@ void get_smartmux_visual_confirm_color(
*color = sm_ver_colors[dc->config.smart_mux_version];
} else {
/* dGPU driving the eDP - red */
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
color->color_b_cb = 0;
}
@@ -478,19 +478,19 @@ void get_vabc_visual_confirm_color(
if (edp_link) {
switch (edp_link->backlight_control_type) {
case BACKLIGHT_CONTROL_PWM:
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
color->color_b_cb = 0;
break;
case BACKLIGHT_CONTROL_AMD_AUX:
color->color_r_cr = 0;
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
color->color_b_cb = 0;
break;
case BACKLIGHT_CONTROL_VESA_AUX:
color->color_r_cr = 0;
color->color_g_y = 0;
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
break;
}
} else {
@@ -508,19 +508,19 @@ void get_subvp_visual_confirm_color(
if (pipe_ctx) {
switch (pipe_ctx->p_state_type) {
case P_STATE_SUB_VP:
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
color->color_b_cb = 0;
break;
case P_STATE_DRR_SUB_VP:
color->color_r_cr = 0;
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
color->color_b_cb = 0;
break;
case P_STATE_V_BLANK_SUB_VP:
color->color_r_cr = 0;
color->color_g_y = 0;
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
break;
default:
break;
@@ -537,34 +537,34 @@ void get_mclk_switch_visual_confirm_color(
if (pipe_ctx) {
switch (pipe_ctx->p_state_type) {
case P_STATE_V_BLANK:
- color->color_r_cr = color_value;
- color->color_g_y = color_value;
+ color->color_r_cr = (uint16_t)color_value;
+ color->color_g_y = (uint16_t)color_value;
color->color_b_cb = 0;
break;
case P_STATE_FPO:
color->color_r_cr = 0;
- color->color_g_y = color_value;
- color->color_b_cb = color_value;
+ color->color_g_y = (uint16_t)color_value;
+ color->color_b_cb = (uint16_t)color_value;
break;
case P_STATE_V_ACTIVE:
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
break;
case P_STATE_SUB_VP:
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
color->color_b_cb = 0;
break;
case P_STATE_DRR_SUB_VP:
color->color_r_cr = 0;
- color->color_g_y = color_value;
+ color->color_g_y = (uint16_t)color_value;
color->color_b_cb = 0;
break;
case P_STATE_V_BLANK_SUB_VP:
color->color_r_cr = 0;
color->color_g_y = 0;
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
break;
default:
break;
@@ -579,13 +579,13 @@ void get_cursor_visual_confirm_color(
uint32_t color_value = MAX_TG_COLOR_VALUE;
if (pipe_ctx->stream && pipe_ctx->stream->cursor_position.enable) {
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
color->color_g_y = 0;
color->color_b_cb = 0;
} else {
color->color_r_cr = 0;
color->color_g_y = 0;
- color->color_b_cb = color_value;
+ color->color_b_cb = (uint16_t)color_value;
}
}
@@ -723,9 +723,9 @@ void get_fams2_visual_confirm_color(
/* driver only handles visual confirm when FAMS2 is disabled */
if (!dc_state_is_fams2_in_use(dc, context)) {
/* when FAMS2 is disabled, all pipes are grey */
- color->color_g_y = color_value / 2;
- color->color_b_cb = color_value / 2;
- color->color_r_cr = color_value / 2;
+ color->color_g_y = (uint16_t)(color_value / 2);
+ color->color_b_cb = (uint16_t)(color_value / 2);
+ color->color_r_cr = (uint16_t)(color_value / 2);
}
}
@@ -2414,7 +2414,7 @@ void get_surface_tile_visual_confirm_color(
switch (bottom_pipe_ctx->plane_state->tiling_info.gfx9.swizzle) {
case DC_SW_LINEAR:
/* LINEAR Surface - set border color to red */
- color->color_r_cr = color_value;
+ color->color_r_cr = (uint16_t)color_value;
break;
default:
break;
@@ -4595,8 +4595,8 @@ void get_refresh_rate_confirm_color(struct pipe_ctx *pipe_ctx, struct tg_color *
if (max_refresh_rate - min_refresh_rate)
scaling_factor = MAX_TG_COLOR_VALUE * (refresh_rate - min_refresh_rate) / (max_refresh_rate - min_refresh_rate);
- pipe_ctx->visual_confirm_color.color_r_cr = color_value;
- pipe_ctx->visual_confirm_color.color_g_y = scaling_factor;
- pipe_ctx->visual_confirm_color.color_b_cb = color_value;
+ pipe_ctx->visual_confirm_color.color_r_cr = (uint16_t)color_value;
+ pipe_ctx->visual_confirm_color.color_g_y = (uint16_t)scaling_factor;
+ pipe_ctx->visual_confirm_color.color_b_cb = (uint16_t)color_value;
}
}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 00b894602423..20600455ff63 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -286,34 +286,34 @@ struct resource_pool *dc_create_resource_pool(struct dc *dc,
#endif
case DCE_VERSION_8_0:
res_pool = dce80_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
case DCE_VERSION_8_1:
res_pool = dce81_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
case DCE_VERSION_8_3:
res_pool = dce83_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
case DCE_VERSION_10_0:
res_pool = dce100_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
case DCE_VERSION_11_0:
res_pool = dce110_create_resource_pool(
- init_data->num_virtual_links, dc,
+ (uint8_t)init_data->num_virtual_links, dc,
init_data->asic_id);
break;
case DCE_VERSION_11_2:
case DCE_VERSION_11_22:
res_pool = dce112_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
case DCE_VERSION_12_0:
case DCE_VERSION_12_1:
res_pool = dce120_create_resource_pool(
- init_data->num_virtual_links, dc);
+ (uint8_t)init_data->num_virtual_links, dc);
break;
#if defined(CONFIG_DRM_AMD_DC_FP)
@@ -511,7 +511,7 @@ bool resource_construct(
pool->hpo_dp_link_enc_count = 0;
if (create_funcs->create_hpo_dp_link_encoder) {
for (i = 0; i < caps->num_hpo_dp_link_encoder; i++) {
- pool->hpo_dp_link_enc[i] = create_funcs->create_hpo_dp_link_encoder(i, ctx);
+ pool->hpo_dp_link_enc[i] = create_funcs->create_hpo_dp_link_encoder((uint8_t)i, ctx);
if (pool->hpo_dp_link_enc[i] == NULL)
DC_ERR("DC: failed to create HPO DP link encoder!\n");
pool->hpo_dp_link_enc_count++;
@@ -610,7 +610,7 @@ bool resource_are_vblanks_synchronizable(
{
uint32_t base60_refresh_rates[] = {10, 20, 5};
uint8_t i;
- uint8_t rr_count = ARRAY_SIZE(base60_refresh_rates);
+ uint8_t rr_count = (uint8_t)ARRAY_SIZE(base60_refresh_rates);
uint64_t frame_time_diff;
if (stream1->ctx->dc->config.vblank_alignment_dto_params &&
@@ -1801,7 +1801,7 @@ struct pipe_ctx *resource_find_free_secondary_pipe_legacy(
int preferred_pipe_idx = (pool->pipe_count - 1) - primary_pipe->pipe_idx;
if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
}
}
@@ -1813,7 +1813,7 @@ struct pipe_ctx *resource_find_free_secondary_pipe_legacy(
for (i = pool->pipe_count - 1; i >= 0; i--) {
if (res_ctx->pipe_ctx[i].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[i];
- secondary_pipe->pipe_idx = i;
+ secondary_pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -2624,8 +2624,8 @@ static int acquire_first_split_pipe(
split_pipe->plane_res.ipp = pool->ipps[i];
split_pipe->plane_res.dpp = pool->dpps[i];
split_pipe->stream_res.opp = pool->opps[i];
- split_pipe->plane_res.mpcc_inst = pool->dpps[i]->inst;
- split_pipe->pipe_idx = i;
+ split_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[i]->inst;
+ split_pipe->pipe_idx = (uint8_t)i;
split_pipe->stream = stream;
return i;
@@ -3804,7 +3804,7 @@ static int acquire_resource_from_hw_enabled_state(
pipe_ctx->stream_res.opp = pool->opps[id_src[i]];
if (pool->dpps[id_src[i]]) {
- pipe_ctx->plane_res.mpcc_inst = pool->dpps[id_src[i]]->inst;
+ pipe_ctx->plane_res.mpcc_inst = (uint8_t)pool->dpps[id_src[i]]->inst;
if (pool->mpc->funcs->read_mpcc_state) {
struct mpcc_state s = {0};
@@ -3823,7 +3823,7 @@ static int acquire_resource_from_hw_enabled_state(
pipe_ctx->stream_res.opp->mpc_tree_params.opp_id = s.opp_id;
}
}
- pipe_ctx->pipe_idx = id_src[i];
+ pipe_ctx->pipe_idx = (uint8_t)id_src[i];
if (id_src[i] >= pool->timing_generator_count) {
id_src[i] = pool->timing_generator_count - 1;
@@ -3939,7 +3939,7 @@ static bool acquire_otg_master_pipe_for_stream(
if (pipe_idx != FREE_PIPE_INDEX_NOT_FOUND) {
pipe_ctx = &new_ctx->res_ctx.pipe_ctx[pipe_idx];
memset(pipe_ctx, 0, sizeof(*pipe_ctx));
- pipe_ctx->pipe_idx = pipe_idx;
+ pipe_ctx->pipe_idx = (uint8_t)pipe_idx;
pipe_ctx->stream_res.tg = pool->timing_generators[pipe_idx];
pipe_ctx->plane_res.mi = pool->mis[pipe_idx];
pipe_ctx->plane_res.hubp = pool->hubps[pipe_idx];
@@ -3948,7 +3948,7 @@ static bool acquire_otg_master_pipe_for_stream(
pipe_ctx->plane_res.dpp = pool->dpps[pipe_idx];
pipe_ctx->stream_res.opp = pool->opps[pipe_idx];
if (pool->dpps[pipe_idx])
- pipe_ctx->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+ pipe_ctx->plane_res.mpcc_inst = (uint8_t)pool->dpps[pipe_idx]->inst;
if (pipe_idx >= pool->timing_generator_count && pool->timing_generator_count != 0) {
int tg_inst = pool->timing_generator_count - 1;
@@ -4497,7 +4497,7 @@ static void patch_gamut_packet_checksum(
for (i = 0; i <= gamut_packet->sb[1]; i++)
chk_sum += ptr[i];
- gamut_packet->sb[2] = (uint8_t) (0x100 - chk_sum);
+ gamut_packet->sb[2] = (uint8_t)(0x100 - chk_sum);
}
}
@@ -4562,7 +4562,7 @@ static void set_avi_info_frame(
/* Y0_Y1_Y2 : The pixel encoding */
/* H14b AVI InfoFrame has extension on Y-field from 2 bits to 3 bits */
- hdmi_info.bits.Y0_Y1_Y2 = pixel_encoding;
+ hdmi_info.bits.Y0_Y1_Y2 = (uint8_t)pixel_encoding;
/* A0 = 1 Active Format Information valid */
hdmi_info.bits.A0 = ACTIVE_FORMAT_VALID;
@@ -4692,7 +4692,7 @@ static void set_avi_info_frame(
}
}
/* If VIC >= 128, the Source shall use AVI InfoFrame Version 3*/
- hdmi_info.bits.VIC0_VIC7 = vic;
+ hdmi_info.bits.VIC0_VIC7 = (uint8_t)vic;
if (vic >= 128)
hdmi_info.bits.header.version = 3;
/* If (C1, C0)=(1, 1) and (EC2, EC1, EC0)=(1, 1, 1),
@@ -4710,7 +4710,7 @@ static void set_avi_info_frame(
hdmi_info.bits.FR0_FR3 = fr_ind & 0xF;
hdmi_info.bits.FR4 = (fr_ind >> 4) & 0x1;
- hdmi_info.bits.RID0_RID5 = rid;
+ hdmi_info.bits.RID0_RID5 = (uint8_t)rid;
}
/* pixel repetition
@@ -4723,10 +4723,10 @@ static void set_avi_info_frame(
* barBottom: Line Number of Start of Bottom Bar.
* barLeft: Pixel Number of End of Left Bar.
* barRight: Pixel Number of Start of Right Bar. */
- hdmi_info.bits.bar_top = stream->timing.v_border_top;
+ hdmi_info.bits.bar_top = (uint16_t)stream->timing.v_border_top;
hdmi_info.bits.bar_bottom = (stream->timing.v_total
- stream->timing.v_border_bottom + 1);
- hdmi_info.bits.bar_left = stream->timing.h_border_left;
+ hdmi_info.bits.bar_left = (uint16_t)stream->timing.h_border_left;
hdmi_info.bits.bar_right = (stream->timing.h_total
- stream->timing.h_border_right + 1);
@@ -4746,7 +4746,7 @@ static void set_avi_info_frame(
*check_sum += hdmi_info.packet_raw_data.sb[byte_index];
/* one byte complement */
- *check_sum = (uint8_t) (0x100 - *check_sum);
+ *check_sum = (uint8_t)(0x100 - *check_sum);
/* Store in hw_path_mode */
info_packet->hb0 = hdmi_info.packet_raw_data.hb0;
@@ -5564,13 +5564,13 @@ bool dc_resource_acquire_secondary_pipe_for_mpc_odm_legacy(
sec_pipe->next_odm_pipe = sec_next;
sec_pipe->prev_odm_pipe = sec_prev;
- sec_pipe->pipe_idx = pipe_idx;
+ sec_pipe->pipe_idx = (uint8_t)pipe_idx;
sec_pipe->plane_res.mi = pool->mis[pipe_idx];
sec_pipe->plane_res.hubp = pool->hubps[pipe_idx];
sec_pipe->plane_res.ipp = pool->ipps[pipe_idx];
sec_pipe->plane_res.xfm = pool->transforms[pipe_idx];
sec_pipe->plane_res.dpp = pool->dpps[pipe_idx];
- sec_pipe->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+ sec_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[pipe_idx]->inst;
sec_pipe->stream_res.dsc = NULL;
if (odm) {
if (!sec_pipe->top_pipe)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index 473fe959f5c7..cca3dece08d3 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -263,7 +263,7 @@ void program_cursor_attributes(
struct dc *dc,
struct dc_stream_state *stream)
{
- int i;
+ uint8_t i;
struct resource_context *res_ctx;
struct pipe_ctx *pipe_to_program = NULL;
bool enable_cursor_offload = dc_dmub_srv_is_cursor_offload_enabled(dc);
@@ -410,7 +410,7 @@ void program_cursor_position(
struct dc *dc,
struct dc_stream_state *stream)
{
- int i;
+ uint8_t i;
struct resource_context *res_ctx;
struct pipe_ctx *pipe_to_program = NULL;
bool enable_cursor_offload = dc_dmub_srv_is_cursor_offload_enabled(dc);
@@ -1032,14 +1032,18 @@ static int dc_stream_get_brightness_millinits_linear_interpolation (struct dc_st
int refresh_hz)
{
long long slope = 0;
+ long long y_intercept = 0;
+ long long brightness_millinits = 0;
+
if (stream->lumin_data.refresh_rate_hz[index2] != stream->lumin_data.refresh_rate_hz[index1]) {
slope = (stream->lumin_data.luminance_millinits[index2] - stream->lumin_data.luminance_millinits[index1]) /
(stream->lumin_data.refresh_rate_hz[index2] - stream->lumin_data.refresh_rate_hz[index1]);
}
- int y_intercept = stream->lumin_data.luminance_millinits[index2] - slope * stream->lumin_data.refresh_rate_hz[index2];
+ y_intercept = stream->lumin_data.luminance_millinits[index2] - slope * stream->lumin_data.refresh_rate_hz[index2];
+ brightness_millinits = y_intercept + (long long)refresh_hz * slope;
- return (y_intercept + refresh_hz * slope);
+ return (int)brightness_millinits;
}
/*
@@ -1051,14 +1055,18 @@ static int dc_stream_get_refresh_hz_linear_interpolation (struct dc_stream_state
int brightness_millinits)
{
long long slope = 1;
+ long long y_intercept = 0;
+ long long refresh_hz = 0;
+
if (stream->lumin_data.refresh_rate_hz[index2] != stream->lumin_data.refresh_rate_hz[index1]) {
slope = (stream->lumin_data.luminance_millinits[index2] - stream->lumin_data.luminance_millinits[index1]) /
(stream->lumin_data.refresh_rate_hz[index2] - stream->lumin_data.refresh_rate_hz[index1]);
}
- int y_intercept = stream->lumin_data.luminance_millinits[index2] - slope * stream->lumin_data.refresh_rate_hz[index2];
+ y_intercept = stream->lumin_data.luminance_millinits[index2] - slope * stream->lumin_data.refresh_rate_hz[index2];
+ refresh_hz = div64_s64((brightness_millinits - y_intercept), slope);
- return ((int)div64_s64((brightness_millinits - y_intercept), slope));
+ return (int)refresh_hz;
}
/*
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
index 56b21059a663..15167d0bd467 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_surface.c
@@ -75,7 +75,7 @@ uint8_t dc_plane_get_pipe_mask(struct dc_state *dc_state, const struct dc_plane
struct pipe_ctx *pipe_ctx = &dc_state->res_ctx.pipe_ctx[i];
if (pipe_ctx->plane_state == plane_state && pipe_ctx->plane_res.hubp)
- pipe_mask |= 1 << pipe_ctx->plane_res.hubp->inst;
+ pipe_mask |= (uint8_t)(1 << pipe_ctx->plane_res.hubp->inst);
}
return pipe_mask;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
index 9cb07110bdc7..317c69719313 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
@@ -336,7 +336,7 @@ bool dc_dmub_srv_optimized_init_done(struct dc_dmub_srv *dc_dmub_srv)
return false;
}
- return boot_status.bits.optimized_init_done;
+ return (bool)boot_status.bits.optimized_init_done;
}
bool dc_dmub_srv_notify_stream_mask(struct dc_dmub_srv *dc_dmub_srv,
@@ -346,7 +346,7 @@ bool dc_dmub_srv_notify_stream_mask(struct dc_dmub_srv *dc_dmub_srv,
return false;
return dc_wake_and_execute_gpint(dc_dmub_srv->ctx, DMUB_GPINT__IDLE_OPT_NOTIFY_STREAM_MASK,
- stream_mask, NULL, DM_DMUB_WAIT_TYPE_WAIT);
+ (uint16_t)stream_mask, NULL, DM_DMUB_WAIT_TYPE_WAIT);
}
bool dc_dmub_srv_is_restore_required(struct dc_dmub_srv *dc_dmub_srv)
@@ -368,7 +368,7 @@ bool dc_dmub_srv_is_restore_required(struct dc_dmub_srv *dc_dmub_srv)
return false;
}
- return boot_status.bits.restore_required;
+ return (bool)boot_status.bits.restore_required;
}
bool dc_dmub_srv_get_dmub_outbox0_msg(const struct dc *dc, struct dmcub_trace_buf_entry *entry)
@@ -415,7 +415,7 @@ void dc_dmub_srv_set_drr_manual_trigger_cmd(struct dc *dc, uint32_t tg_inst)
static uint8_t dc_dmub_srv_get_pipes_for_stream(struct dc *dc, struct dc_stream_state *stream)
{
uint8_t pipes = 0;
- int i = 0;
+ uint8_t i = 0;
for (i = 0; i < MAX_PIPES; i++) {
struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
@@ -433,15 +433,15 @@ static void dc_dmub_srv_populate_fams_pipe_info(struct dc *dc, struct dc_state *
int j;
int pipe_idx = 0;
- fams_pipe_data->pipe_index[pipe_idx++] = head_pipe->plane_res.hubp->inst;
+ fams_pipe_data->pipe_index[pipe_idx++] = (uint8_t)head_pipe->plane_res.hubp->inst;
for (j = 0; j < dc->res_pool->pipe_count; j++) {
struct pipe_ctx *split_pipe = &context->res_ctx.pipe_ctx[j];
if (split_pipe->stream == head_pipe->stream && (split_pipe->top_pipe || split_pipe->prev_odm_pipe)) {
- fams_pipe_data->pipe_index[pipe_idx++] = split_pipe->plane_res.hubp->inst;
+ fams_pipe_data->pipe_index[pipe_idx++] = (uint8_t)split_pipe->plane_res.hubp->inst;
}
}
- fams_pipe_data->pipe_count = pipe_idx;
+ fams_pipe_data->pipe_count = (uint8_t)pipe_idx;
}
bool dc_dmub_srv_p_state_delegate(struct dc *dc, bool should_manage_pstate, struct dc_state *context)
@@ -456,7 +456,7 @@ bool dc_dmub_srv_p_state_delegate(struct dc *dc, bool should_manage_pstate, stru
if (dc == NULL)
return false;
- visual_confirm_enabled = dc->debug.visual_confirm == VISUAL_CONFIRM_FAMS;
+ visual_confirm_enabled = (uint8_t)(dc->debug.visual_confirm == VISUAL_CONFIRM_FAMS);
// Format command.
cmd.fw_assisted_mclk_switch.header.type = DMUB_CMD__FW_ASSISTED_MCLK_SWITCH;
@@ -477,7 +477,8 @@ bool dc_dmub_srv_p_state_delegate(struct dc *dc, bool should_manage_pstate, stru
*/
stream_status = dc_state_get_stream_status(context, pipe->stream);
if (stream_status && !stream_status->fpo_in_use) {
- cmd.fw_assisted_mclk_switch.config_data.vactive_stretch_margin_us = dc->debug.fpo_vactive_margin_us;
+ cmd.fw_assisted_mclk_switch.config_data.vactive_stretch_margin_us =
+ (uint16_t)dc->debug.fpo_vactive_margin_us;
break;
}
}
@@ -492,11 +493,13 @@ bool dc_dmub_srv_p_state_delegate(struct dc *dc, bool should_manage_pstate, stru
stream_status = dc_state_get_stream_status(context, pipe->stream);
if (stream_status && stream_status->fpo_in_use) {
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
- uint8_t min_refresh_in_hz = (pipe->stream->timing.min_refresh_in_uhz + 999999) / 1000000;
+ uint8_t min_refresh_in_hz;
+
+ min_refresh_in_hz = (uint8_t)((pipe->stream->timing.min_refresh_in_uhz + 999999) / 1000000);
config_data->pipe_data[k].pix_clk_100hz = pipe->stream->timing.pix_clk_100hz;
config_data->pipe_data[k].min_refresh_in_hz = min_refresh_in_hz;
- config_data->pipe_data[k].max_ramp_step = ramp_up_num_steps;
+ config_data->pipe_data[k].max_ramp_step = (uint8_t)ramp_up_num_steps;
config_data->pipe_data[k].pipes = dc_dmub_srv_get_pipes_for_stream(dc, pipe->stream);
dc_dmub_srv_populate_fams_pipe_info(dc, context, pipe, &config_data->pipe_data[k]);
k++;
@@ -551,7 +554,7 @@ void dc_dmub_srv_get_visual_confirm_color_cmd(struct dc *dc, struct pipe_ctx *pi
cmd.visual_confirm_color.header.sub_type = 0;
cmd.visual_confirm_color.header.ret_status = 1;
cmd.visual_confirm_color.header.payload_bytes = sizeof(struct dmub_cmd_visual_confirm_color_data);
- cmd.visual_confirm_color.visual_confirm_color_data.visual_confirm_color.panel_inst = panel_inst;
+ cmd.visual_confirm_color.visual_confirm_color_data.visual_confirm_color.panel_inst = (uint16_t)panel_inst;
// If command was processed, copy feature caps to dmub srv
if (dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY) &&
@@ -589,17 +592,17 @@ static void populate_subvp_cmd_drr_info(struct dc *dc,
struct dc_crtc_timing *main_timing = &subvp_pipe->stream->timing;
struct dc_crtc_timing *phantom_timing;
struct dc_crtc_timing *drr_timing = &vblank_pipe->stream->timing;
- uint16_t drr_frame_us = 0;
- uint16_t min_drr_supported_us = 0;
- uint16_t max_drr_supported_us = 0;
- uint16_t max_drr_vblank_us = 0;
- uint16_t max_drr_mallregion_us = 0;
- uint16_t mall_region_us = 0;
- uint16_t prefetch_us = 0;
- uint16_t subvp_active_us = 0;
- uint16_t drr_active_us = 0;
- uint16_t min_vtotal_supported = 0;
- uint16_t max_vtotal_supported = 0;
+ uint64_t drr_frame_us = 0;
+ uint64_t min_drr_supported_us = 0;
+ uint64_t max_drr_supported_us = 0;
+ uint64_t max_drr_vblank_us = 0;
+ uint64_t max_drr_mallregion_us = 0;
+ uint64_t mall_region_us = 0;
+ uint64_t prefetch_us = 0;
+ uint64_t subvp_active_us = 0;
+ uint64_t drr_active_us = 0;
+ uint64_t min_vtotal_supported = 0;
+ uint64_t max_vtotal_supported = 0;
if (!phantom_stream)
return;
@@ -639,9 +642,10 @@ static void populate_subvp_cmd_drr_info(struct dc *dc,
*/
max_vtotal_supported = max_vtotal_supported - dc->caps.subvp_drr_max_vblank_margin_us;
- pipe_data->pipe_config.vblank_data.drr_info.min_vtotal_supported = min_vtotal_supported;
- pipe_data->pipe_config.vblank_data.drr_info.max_vtotal_supported = max_vtotal_supported;
- pipe_data->pipe_config.vblank_data.drr_info.drr_vblank_start_margin = dc->caps.subvp_drr_vblank_start_margin_us;
+ pipe_data->pipe_config.vblank_data.drr_info.min_vtotal_supported = (uint16_t)min_vtotal_supported;
+ pipe_data->pipe_config.vblank_data.drr_info.max_vtotal_supported = (uint16_t)max_vtotal_supported;
+ pipe_data->pipe_config.vblank_data.drr_info.drr_vblank_start_margin =
+ (uint16_t)dc->caps.subvp_drr_vblank_start_margin_us;
}
/**
@@ -686,12 +690,12 @@ static void populate_subvp_cmd_vblank_pipe_info(struct dc *dc,
pipe_data->mode = VBLANK;
pipe_data->pipe_config.vblank_data.pix_clk_100hz = vblank_pipe->stream->timing.pix_clk_100hz;
- pipe_data->pipe_config.vblank_data.vblank_start = vblank_pipe->stream->timing.v_total -
- vblank_pipe->stream->timing.v_front_porch;
- pipe_data->pipe_config.vblank_data.vtotal = vblank_pipe->stream->timing.v_total;
- pipe_data->pipe_config.vblank_data.htotal = vblank_pipe->stream->timing.h_total;
+ pipe_data->pipe_config.vblank_data.vblank_start = (uint16_t)(vblank_pipe->stream->timing.v_total -
+ vblank_pipe->stream->timing.v_front_porch);
+ pipe_data->pipe_config.vblank_data.vtotal = (uint16_t)vblank_pipe->stream->timing.v_total;
+ pipe_data->pipe_config.vblank_data.htotal = (uint16_t)vblank_pipe->stream->timing.h_total;
pipe_data->pipe_config.vblank_data.vblank_pipe_index = vblank_pipe->pipe_idx;
- pipe_data->pipe_config.vblank_data.vstartup_start = vblank_pipe->pipe_dlg_param.vstartup_start;
+ pipe_data->pipe_config.vblank_data.vstartup_start = (uint16_t)vblank_pipe->pipe_dlg_param.vstartup_start;
pipe_data->pipe_config.vblank_data.vblank_end =
vblank_pipe->stream->timing.v_total - vblank_pipe->stream->timing.v_front_porch - vblank_pipe->stream->timing.v_addressable;
@@ -739,10 +743,10 @@ static void update_subvp_prefetch_end_to_mall_start(struct dc *dc,
phantom_timing0 = &phantom_stream0->timing;
phantom_timing1 = &phantom_stream1->timing;
- subvp0_prefetch_us = div64_u64(((uint64_t)(phantom_timing0->v_total - phantom_timing0->v_front_porch) *
+ subvp0_prefetch_us = (uint32_t)div64_u64(((uint64_t)(phantom_timing0->v_total - phantom_timing0->v_front_porch) *
(uint64_t)phantom_timing0->h_total * 1000000),
(((uint64_t)phantom_timing0->pix_clk_100hz * 100) + dc->caps.subvp_prefetch_end_to_mall_start_us));
- subvp1_prefetch_us = div64_u64(((uint64_t)(phantom_timing1->v_total - phantom_timing1->v_front_porch) *
+ subvp1_prefetch_us = (uint32_t)div64_u64(((uint64_t)(phantom_timing1->v_total - phantom_timing1->v_front_porch) *
(uint64_t)phantom_timing1->h_total * 1000000),
(((uint64_t)phantom_timing1->pix_clk_100hz * 100) + dc->caps.subvp_prefetch_end_to_mall_start_us));
@@ -751,8 +755,8 @@ static void update_subvp_prefetch_end_to_mall_start(struct dc *dc,
if (subvp0_prefetch_us > subvp1_prefetch_us) {
pipe_data = &cmd->fw_assisted_mclk_switch_v2.config_data.pipe_data[1];
prefetch_delta_us = subvp0_prefetch_us - subvp1_prefetch_us;
- pipe_data->pipe_config.subvp_data.prefetch_to_mall_start_lines =
- div64_u64(((uint64_t)(dc->caps.subvp_prefetch_end_to_mall_start_us + prefetch_delta_us) *
+ pipe_data->pipe_config.subvp_data.prefetch_to_mall_start_lines =
+ (uint16_t)div64_u64(((uint64_t)(dc->caps.subvp_prefetch_end_to_mall_start_us + prefetch_delta_us) *
((uint64_t)phantom_timing1->pix_clk_100hz * 100) + ((uint64_t)phantom_timing1->h_total * 1000000 - 1)),
((uint64_t)phantom_timing1->h_total * 1000000));
@@ -760,7 +764,7 @@ static void update_subvp_prefetch_end_to_mall_start(struct dc *dc,
pipe_data = &cmd->fw_assisted_mclk_switch_v2.config_data.pipe_data[0];
prefetch_delta_us = subvp1_prefetch_us - subvp0_prefetch_us;
pipe_data->pipe_config.subvp_data.prefetch_to_mall_start_lines =
- div64_u64(((uint64_t)(dc->caps.subvp_prefetch_end_to_mall_start_us + prefetch_delta_us) *
+ (uint16_t)div64_u64(((uint64_t)(dc->caps.subvp_prefetch_end_to_mall_start_us + prefetch_delta_us) *
((uint64_t)phantom_timing0->pix_clk_100hz * 100) + ((uint64_t)phantom_timing0->h_total * 1000000 - 1)),
((uint64_t)phantom_timing0->h_total * 1000000));
}
@@ -800,14 +804,14 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
pipe_data->mode = SUBVP;
pipe_data->pipe_config.subvp_data.pix_clk_100hz = subvp_pipe->stream->timing.pix_clk_100hz;
- pipe_data->pipe_config.subvp_data.htotal = subvp_pipe->stream->timing.h_total;
- pipe_data->pipe_config.subvp_data.vtotal = subvp_pipe->stream->timing.v_total;
+ pipe_data->pipe_config.subvp_data.htotal = (uint16_t)subvp_pipe->stream->timing.h_total;
+ pipe_data->pipe_config.subvp_data.vtotal = (uint16_t)subvp_pipe->stream->timing.v_total;
pipe_data->pipe_config.subvp_data.main_vblank_start =
- main_timing->v_total - main_timing->v_front_porch;
+ (uint16_t)(main_timing->v_total - main_timing->v_front_porch);
pipe_data->pipe_config.subvp_data.main_vblank_end =
- main_timing->v_total - main_timing->v_front_porch - main_timing->v_addressable;
- pipe_data->pipe_config.subvp_data.mall_region_lines = phantom_timing->v_addressable;
- pipe_data->pipe_config.subvp_data.main_pipe_index = subvp_pipe->stream_res.tg->inst;
+ (uint16_t)(main_timing->v_total - main_timing->v_front_porch - main_timing->v_addressable);
+ pipe_data->pipe_config.subvp_data.mall_region_lines = (uint16_t)phantom_timing->v_addressable;
+ pipe_data->pipe_config.subvp_data.main_pipe_index = (uint8_t)subvp_pipe->stream_res.tg->inst;
pipe_data->pipe_config.subvp_data.is_drr = subvp_pipe->stream->ignore_msa_timing_param &&
(subvp_pipe->stream->allow_freesync || subvp_pipe->stream->vrr_active_variable || subvp_pipe->stream->vrr_active_fixed);
@@ -822,8 +826,8 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
reduce_fraction(subvp_pipe->plane_state->src_rect.height, subvp_pipe->plane_state->dst_rect.height,
&out_num_plane, &out_den_plane);
reduce_fraction(out_num_stream * out_num_plane, out_den_stream * out_den_plane, &out_num, &out_den);
- pipe_data->pipe_config.subvp_data.scale_factor_numerator = out_num;
- pipe_data->pipe_config.subvp_data.scale_factor_denominator = out_den;
+ pipe_data->pipe_config.subvp_data.scale_factor_numerator = (uint8_t)out_num;
+ pipe_data->pipe_config.subvp_data.scale_factor_denominator = (uint8_t)out_den;
// Prefetch lines is equal to VACTIVE + BP + VSYNC
pipe_data->pipe_config.subvp_data.prefetch_lines =
@@ -831,16 +835,16 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
// Round up
pipe_data->pipe_config.subvp_data.prefetch_to_mall_start_lines =
- div64_u64(((uint64_t)dc->caps.subvp_prefetch_end_to_mall_start_us * ((uint64_t)phantom_timing->pix_clk_100hz * 100) +
+ (uint16_t)div64_u64(((uint64_t)dc->caps.subvp_prefetch_end_to_mall_start_us * ((uint64_t)phantom_timing->pix_clk_100hz * 100) +
((uint64_t)phantom_timing->h_total * 1000000 - 1)), ((uint64_t)phantom_timing->h_total * 1000000));
pipe_data->pipe_config.subvp_data.processing_delay_lines =
- div64_u64(((uint64_t)(dc->caps.subvp_fw_processing_delay_us) * ((uint64_t)phantom_timing->pix_clk_100hz * 100) +
+ (uint16_t)div64_u64(((uint64_t)(dc->caps.subvp_fw_processing_delay_us) * ((uint64_t)phantom_timing->pix_clk_100hz * 100) +
((uint64_t)phantom_timing->h_total * 1000000 - 1)), ((uint64_t)phantom_timing->h_total * 1000000));
if (subvp_pipe->bottom_pipe) {
- pipe_data->pipe_config.subvp_data.main_split_pipe_index = subvp_pipe->bottom_pipe->pipe_idx;
+ pipe_data->pipe_config.subvp_data.main_split_pipe_index = (uint8_t)subvp_pipe->bottom_pipe->pipe_idx;
} else if (subvp_pipe->next_odm_pipe) {
- pipe_data->pipe_config.subvp_data.main_split_pipe_index = subvp_pipe->next_odm_pipe->pipe_idx;
+ pipe_data->pipe_config.subvp_data.main_split_pipe_index = (uint8_t)subvp_pipe->next_odm_pipe->pipe_idx;
} else {
pipe_data->pipe_config.subvp_data.main_split_pipe_index = 0xF;
}
@@ -851,11 +855,11 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
if (resource_is_pipe_type(phantom_pipe, OTG_MASTER) &&
phantom_pipe->stream == dc_state_get_paired_subvp_stream(context, subvp_pipe->stream)) {
- pipe_data->pipe_config.subvp_data.phantom_pipe_index = phantom_pipe->stream_res.tg->inst;
+ pipe_data->pipe_config.subvp_data.phantom_pipe_index = (uint8_t)phantom_pipe->stream_res.tg->inst;
if (phantom_pipe->bottom_pipe) {
- pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = phantom_pipe->bottom_pipe->plane_res.hubp->inst;
+ pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = (uint8_t)phantom_pipe->bottom_pipe->plane_res.hubp->inst;
} else if (phantom_pipe->next_odm_pipe) {
- pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = phantom_pipe->next_odm_pipe->plane_res.hubp->inst;
+ pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = (uint8_t)phantom_pipe->next_odm_pipe->plane_res.hubp->inst;
} else {
pipe_data->pipe_config.subvp_data.phantom_split_pipe_index = 0xF;
}
@@ -933,15 +937,15 @@ void dc_dmub_setup_subvp_dmub_command(struct dc *dc,
if (subvp_count == 2) {
update_subvp_prefetch_end_to_mall_start(dc, context, &cmd, subvp_pipes);
}
- cmd.fw_assisted_mclk_switch_v2.config_data.pstate_allow_width_us = dc->caps.subvp_pstate_allow_width_us;
- cmd.fw_assisted_mclk_switch_v2.config_data.vertical_int_margin_us = dc->caps.subvp_vertical_int_margin_us;
+ cmd.fw_assisted_mclk_switch_v2.config_data.pstate_allow_width_us = (uint8_t)dc->caps.subvp_pstate_allow_width_us;
+ cmd.fw_assisted_mclk_switch_v2.config_data.vertical_int_margin_us = (uint8_t)dc->caps.subvp_vertical_int_margin_us;
// Store the original watermark value for this SubVP config so we can lower it when the
// MCLK switch starts
wm_val_refclk = context->bw_ctx.bw.dcn.watermarks.a.cstate_pstate.pstate_change_ns *
(dc->res_pool->ref_clocks.dchub_ref_clock_inKhz / 1000) / 1000;
- cmd.fw_assisted_mclk_switch_v2.config_data.watermark_a_cache = wm_val_refclk < 0xFFFF ? wm_val_refclk : 0xFFFF;
+ cmd.fw_assisted_mclk_switch_v2.config_data.watermark_a_cache = (uint16_t)(wm_val_refclk < 0xFFFF ? wm_val_refclk : 0xFFFF);
}
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
@@ -1060,10 +1064,10 @@ static void dc_build_cursor_update_payload0(
payload->cursor_rect.width = hubp->cur_rect.w;
payload->cursor_rect.height = hubp->cur_rect.h;
- payload->enable = hubp->pos.cur_ctl.bits.cur_enable;
+ payload->enable = (uint8_t)hubp->pos.cur_ctl.bits.cur_enable;
payload->pipe_idx = p_idx;
- payload->panel_inst = panel_inst;
- payload->otg_inst = pipe_ctx->stream_res.tg->inst;
+ payload->panel_inst = (uint8_t)panel_inst;
+ payload->otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
}
static void dc_build_cursor_position_update_payload0(
@@ -1645,7 +1649,7 @@ bool dc_dmub_srv_should_detect(struct dc_dmub_srv *dc_dmub_srv)
if (dc_dmub_srv->dmub->shared_state &&
dc_dmub_srv->dmub->meta_info.feature_bits.bits.shared_state_link_detection) {
ips_fw = &dc_dmub_srv->dmub->shared_state[DMUB_SHARED_SHARE_FEATURE__IPS_FW].data.ips_fw;
- return ips_fw->signals.bits.detection_required;
+ return (bool)ips_fw->signals.bits.detection_required;
}
/* Detection may require reading scratch 0 - exit out of idle prior to the read. */
@@ -1940,7 +1944,7 @@ void dc_dmub_srv_fams2_drr_update(struct dc *dc,
cmd.fams2_drr_update.header.type = DMUB_CMD__FW_ASSISTED_MCLK_SWITCH;
cmd.fams2_drr_update.header.sub_type = DMUB_CMD__FAMS2_DRR_UPDATE;
- cmd.fams2_drr_update.dmub_optc_state_req.tg_inst = tg_inst;
+ cmd.fams2_drr_update.dmub_optc_state_req.tg_inst = (uint8_t)tg_inst;
cmd.fams2_drr_update.dmub_optc_state_req.v_total_max = vtotal_max;
cmd.fams2_drr_update.dmub_optc_state_req.v_total_min = vtotal_min;
cmd.fams2_drr_update.dmub_optc_state_req.v_total_mid = vtotal_mid;
@@ -1991,10 +1995,10 @@ void dc_dmub_srv_fams2_passthrough_flip(
cmds[num_cmds].fams2_flip.header.multi_cmd_pending = 1;
/* set topology info */
- cmds[num_cmds].fams2_flip.flip_info.pipe_mask = dc_plane_get_pipe_mask(state, plane_state);
- if (stream_status)
- cmds[num_cmds].fams2_flip.flip_info.otg_inst = stream_status->primary_otg_inst;
-
+ cmds[num_cmds].fams2_flip.flip_info.pipe_mask = (uint8_t)dc_plane_get_pipe_mask(state, plane_state);
+ if (stream_status) {
+ cmds[num_cmds].fams2_flip.flip_info.otg_inst = (uint8_t)stream_status->primary_otg_inst;
+ }
cmds[num_cmds].fams2_flip.flip_info.config.bits.is_immediate = plane_state->flip_immediate;
/* build address info for command */
diff --git a/drivers/gpu/drm/amd/display/dc/dc_fused_io.c b/drivers/gpu/drm/amd/display/dc/dc_fused_io.c
index fee69642fb93..664cb4abf623 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_fused_io.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_fused_io.c
@@ -23,11 +23,11 @@ static bool op_i2c_convert(
req->type = type;
loc->is_aux = false;
- loc->ddc_line = ddc_line;
+ loc->ddc_line = (uint8_t)ddc_line;
loc->over_aux = over_aux;
loc->address = op->address;
loc->offset = op->offset;
- loc->length = op->size;
+ loc->length = (uint8_t)op->size;
memcpy(req->buffer, op->data, op->size);
return true;
@@ -84,7 +84,7 @@ static bool atomic_write_poll_read(
timeout_us += timeout_per_aux_transaction_us * (io->request.u.aux.length / 16);
}
- if (!dm_helpers_execute_fused_io(link->ctx, link, commands, count, timeout_us))
+ if (!dm_helpers_execute_fused_io(link->ctx, link, commands, count, (uint32_t)timeout_us))
return false;
return commands[0].fused_io.request.status == FUSED_REQUEST_STATUS_SUCCESS;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_helper.c b/drivers/gpu/drm/amd/display/dc/dc_helper.c
index 77299767096f..e221384f7611 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_helper.c
@@ -122,7 +122,7 @@ static void set_reg_field_values(struct dc_reg_value_masks *field_value_mask,
field_value = va_arg(ap, uint32_t);
set_reg_field_value_masks(field_value_mask,
- field_value, mask, shift);
+ field_value, mask, (uint8_t)shift);
i++;
}
}
@@ -459,7 +459,7 @@ void generic_reg_wait(const struct dc_context *ctx,
reg_val = dm_read_reg(ctx, addr);
- field_value = get_reg_field_value_ex(reg_val, mask, shift);
+ field_value = get_reg_field_value_ex(reg_val, mask, (uint8_t)shift);
if (field_value == condition_value) {
if (i * delay_between_poll_us > 1000)
@@ -525,7 +525,7 @@ uint32_t generic_indirect_reg_get(const struct dc_context *ctx,
mask = va_arg(ap, uint32_t);
field_value = va_arg(ap, uint32_t *);
- *field_value = get_reg_field_value_ex(value, mask, shift);
+ *field_value = get_reg_field_value_ex(value, mask, (uint8_t)shift);
i++;
}
@@ -554,7 +554,7 @@ uint32_t generic_indirect_reg_update_ex(const struct dc_context *ctx,
mask = va_arg(ap, uint32_t);
field_value = va_arg(ap, uint32_t);
- reg_val = set_reg_field_value_ex(reg_val, field_value, mask, shift);
+ reg_val = set_reg_field_value_ex(reg_val, field_value, mask, (uint8_t)shift);
i++;
}
@@ -584,7 +584,7 @@ uint32_t generic_indirect_reg_update_ex_sync(const struct dc_context *ctx,
mask = va_arg(ap, uint32_t);
field_value = va_arg(ap, uint32_t);
- reg_val = set_reg_field_value_ex(reg_val, field_value, mask, shift);
+ reg_val = set_reg_field_value_ex(reg_val, field_value, mask, (uint8_t)shift);
i++;
}
@@ -615,7 +615,7 @@ uint32_t generic_indirect_reg_get_sync(const struct dc_context *ctx,
mask = va_arg(ap, uint32_t);
field_value = va_arg(ap, uint32_t *);
- *field_value = get_reg_field_value_ex(value, mask, shift);
+ *field_value = get_reg_field_value_ex(value, mask, (uint8_t)shift);
i++;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn31/dcn31_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn31/dcn31_dccg.c
index 42cb8b29dd47..1f5a4a8bf691 100644
--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn31/dcn31_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn31/dcn31_dccg.c
@@ -576,7 +576,7 @@ void dccg31_set_dtbclk_dto(
// phase / modulo = dtbclk / dtbclk ref
modulo = params->ref_dtbclk_khz * 1000;
- phase = div_u64((((unsigned long long)modulo * req_dtbclk_khz) + params->ref_dtbclk_khz - 1),
+ phase = (uint32_t)div_u64((((unsigned long long)modulo * req_dtbclk_khz) + params->ref_dtbclk_khz - 1),
params->ref_dtbclk_khz);
REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
@@ -620,7 +620,7 @@ void dccg31_set_audio_dtbclk_dto(
// phase / modulo = dtbclk / dtbclk ref
modulo = params->ref_dtbclk_khz * 1000;
- phase = div_u64((((unsigned long long)modulo * params->req_audio_dtbclk_khz) + params->ref_dtbclk_khz - 1),
+ phase = (uint32_t)div_u64((((unsigned long long)modulo * params->req_audio_dtbclk_khz) + params->ref_dtbclk_khz - 1),
params->ref_dtbclk_khz);
diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn401/dcn401_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn401/dcn401_dccg.c
index 72aaf057e775..d49689baa7ca 100644
--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn401/dcn401_dccg.c
+++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn401/dcn401_dccg.c
@@ -610,11 +610,13 @@ void dccg401_set_dp_dto(
* int = target_pix_rate / reference_clock
* phase = target_pix_rate - int * reference_clock,
* modulo = reference_clock */
- dto_integer = div_u64(params->pixclk_hz, dto_modulo_hz);
+
+ /* dto_modulo_hz = refclk (~100 MHz), well within uint32_t range */
+ dto_integer = div_u64(params->pixclk_hz, (uint32_t)dto_modulo_hz);
dto_phase_hz = params->pixclk_hz - dto_integer * dto_modulo_hz;
- if (dto_phase_hz <= 0 && dto_integer <= 0) {
- /* negative pixel rate should never happen */
+ if (dto_phase_hz == 0 && dto_integer == 0) {
+ /* zero pixel rate should never happen */
BREAK_TO_DEBUGGER();
return;
}
@@ -656,25 +658,25 @@ void dccg401_set_dp_dto(
dccg401_set_dtbclk_p_src(dccg, params->clk_src, params->otg_inst);
- REG_WRITE(DP_DTO_PHASE[params->otg_inst], dto_phase_hz);
- REG_WRITE(DP_DTO_MODULO[params->otg_inst], dto_modulo_hz);
+ REG_WRITE(DP_DTO_PHASE[params->otg_inst], (uint32_t)dto_phase_hz);
+ REG_WRITE(DP_DTO_MODULO[params->otg_inst], (uint32_t)dto_modulo_hz);
switch (params->otg_inst) {
case 0:
REG_UPDATE(OTG_PIXEL_RATE_DIV,
- DPDTO0_INT, dto_integer);
+ DPDTO0_INT, (uint32_t)dto_integer);
break;
case 1:
REG_UPDATE(OTG_PIXEL_RATE_DIV,
- DPDTO1_INT, dto_integer);
+ DPDTO1_INT, (uint32_t)dto_integer);
break;
case 2:
REG_UPDATE(OTG_PIXEL_RATE_DIV,
- DPDTO2_INT, dto_integer);
+ DPDTO2_INT, (uint32_t)dto_integer);
break;
case 3:
REG_UPDATE(OTG_PIXEL_RATE_DIV,
- DPDTO3_INT, dto_integer);
+ DPDTO3_INT, (uint32_t)dto_integer);
break;
default:
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
index 673bb87d2c17..eee58f946fae 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_aux.c
@@ -321,7 +321,7 @@ static int read_channel_reply(struct dce_aux *engine, uint32_t size,
uint32_t aux_sw_data_val;
REG_GET(AUX_SW_DATA, AUX_SW_DATA, &aux_sw_data_val);
- buffer[i] = aux_sw_data_val;
+ buffer[i] = (uint8_t)aux_sw_data_val;
++i;
}
@@ -375,7 +375,7 @@ static enum aux_return_code_type get_channel_status(
(value & AUX_SW_STATUS__AUX_SW_RX_RECV_INVALID_L_MASK))
return AUX_RET_ERROR_INVALID_REPLY;
- *returned_bytes = get_reg_field_value(value,
+ *returned_bytes = (uint8_t)get_reg_field_value(value,
AUX_SW_STATUS,
AUX_SW_REPLY_BYTE_COUNT);
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
index 321a012268b0..b97b4cd23eaa 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c
@@ -162,11 +162,9 @@ static bool calculate_fb_and_fractional_fb_divider(
feedback_divider *= (uint64_t)
(calc_pll_cs->fract_fb_divider_precision_factor);
- *feedback_divider_param =
- div_u64_rem(
- feedback_divider,
- calc_pll_cs->fract_fb_divider_factor,
- fract_feedback_divider_param);
+ *feedback_divider_param = (uint32_t)div_u64_rem(
+ feedback_divider, calc_pll_cs->fract_fb_divider_factor,
+ fract_feedback_divider_param);
if (*feedback_divider_param != 0)
return true;
@@ -240,7 +238,7 @@ static bool calc_fb_divider_checking_tolerance(
pll_settings->calculated_pix_clk_100hz =
actual_calculated_clock_100hz;
pll_settings->vco_freq =
- div_u64((u64)actual_calculated_clock_100hz * post_divider, 10);
+ (uint32_t)div_u64((u64)actual_calculated_clock_100hz * post_divider, 10);
return true;
}
return false;
@@ -440,8 +438,7 @@ static bool pll_adjust_pix_clk(
bp_adjust_pixel_clock_params.
encoder_object_id = pix_clk_params->encoder_object_id;
bp_adjust_pixel_clock_params.signal_type = pix_clk_params->signal_type;
- bp_adjust_pixel_clock_params.
- ss_enable = pix_clk_params->flags.ENABLE_SS;
+ bp_adjust_pixel_clock_params.ss_enable = pix_clk_params->flags.ENABLE_SS != 0;
bp_result = clk_src->bios->funcs->adjust_pixel_clock(
clk_src->bios, &bp_adjust_pixel_clock_params);
if (bp_result == BP_RESULT_OK) {
@@ -958,7 +955,7 @@ static bool dce112_program_pix_clk(
dce112_program_pixel_clk_resync(clk_src,
pix_clk_params->signal_type,
pix_clk_params->color_depth,
- pix_clk_params->flags.SUPPORT_YCBCR420);
+ pix_clk_params->flags.SUPPORT_YCBCR420 != 0);
return true;
}
@@ -1059,7 +1056,7 @@ static bool dcn31_program_pix_clk(
dce112_program_pixel_clk_resync(clk_src,
pix_clk_params->signal_type,
pix_clk_params->color_depth,
- pix_clk_params->flags.SUPPORT_YCBCR420);
+ pix_clk_params->flags.SUPPORT_YCBCR420 != 0);
}
return true;
@@ -1162,7 +1159,7 @@ static bool dcn401_program_pix_clk(
dce112_program_pixel_clk_resync(clk_src,
pix_clk_params->signal_type,
pix_clk_params->color_depth,
- pix_clk_params->flags.SUPPORT_YCBCR420);
+ pix_clk_params->flags.SUPPORT_YCBCR420 != 0);
}
return true;
@@ -1211,9 +1208,8 @@ static bool get_pixel_clk_frequency_100hz(
*/
modulo_hz = REG_READ(MODULO[inst]);
if (modulo_hz)
- *pixel_clk_khz = div_u64((uint64_t)clock_hz*
- dp_dto_ref_khz*10,
- modulo_hz);
+ *pixel_clk_khz = (unsigned int)div_u64((uint64_t)clock_hz *
+ dp_dto_ref_khz * 10, modulo_hz);
else
*pixel_clk_khz = 0;
} else {
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
index fe239a96121e..05892ab4529f 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_hw.c
@@ -120,7 +120,7 @@ static void process_channel_reply(
uint32_t i2c_data;
REG_GET(DC_I2C_DATA, DC_I2C_DATA, &i2c_data);
- *buffer++ = i2c_data;
+ *buffer++ = (uint8_t)i2c_data;
--length;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
index de31fb1b6819..31a9181c6a2b 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_panel_cntl.c
@@ -209,7 +209,7 @@ static void dce_driver_set_backlight(struct panel_cntl *panel_cntl,
if (pwm_period_bitcnt == 0)
bit_count = 16;
else
- bit_count = pwm_period_bitcnt;
+ bit_count = (uint8_t)pwm_period_bitcnt;
/* e.g. maskedPwmPeriod = 0x24 when bitCount is 6 */
masked_pwm_period = masked_pwm_period & ((1 << bit_count) - 1);
@@ -224,7 +224,7 @@ static void dce_driver_set_backlight(struct panel_cntl *panel_cntl,
* components shift by bitCount then mask 16 bits and add rounding bit
* from MSB of fraction e.g. 0x86F7 = ((0x21BDC0 >> 6) & 0xFFF) + 0
*/
- backlight_16bit = active_duty_cycle >> bit_count;
+ backlight_16bit = (uint32_t)(active_duty_cycle >> bit_count);
backlight_16bit &= 0xFFFF;
backlight_16bit += (active_duty_cycle >> (bit_count - 1)) & 0x1;
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
index c1448ae47366..d178dcc4306d 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
@@ -802,7 +802,7 @@ static void program_bit_depth_reduction(
ASSERT(depth <= COLOR_DEPTH_121212); /* Invalid clamp bit depth */
- spatial_dither_enable = bit_depth_params->flags.SPATIAL_DITHER_ENABLED;
+ spatial_dither_enable = bit_depth_params->flags.SPATIAL_DITHER_ENABLED != 0;
/* Default to 12 bit truncation without rounding */
trunc_round_depth = DCP_OUT_TRUNC_ROUND_DEPTH_12BIT;
trunc_mode = DCP_OUT_TRUNC_ROUND_MODE_TRUNCATE;
@@ -835,9 +835,9 @@ static void program_bit_depth_reduction(
spatial_dither_enable,
DCP_SPATIAL_DITHER_MODE_A_AA_A,
DCP_SPATIAL_DITHER_DEPTH_30BPP,
- bit_depth_params->flags.FRAME_RANDOM,
- bit_depth_params->flags.RGB_RANDOM,
- bit_depth_params->flags.HIGHPASS_RANDOM);
+ bit_depth_params->flags.FRAME_RANDOM != 0,
+ bit_depth_params->flags.RGB_RANDOM != 0,
+ bit_depth_params->flags.HIGHPASS_RANDOM != 0);
}
#if defined(CONFIG_DRM_AMD_DC_SI)
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
index 806b5709c9e7..a3cd04fc44f7 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c
@@ -176,7 +176,7 @@ void dmub_abm_init_config(struct abm *abm,
cmd.abm_init_config.header.type = DMUB_CMD__ABM;
cmd.abm_init_config.header.sub_type = DMUB_CMD__ABM_INIT_CONFIG;
cmd.abm_init_config.abm_init_config_data.src.quad_part = dc->dmub_srv->dmub->scratch_mem_fb.gpu_addr;
- cmd.abm_init_config.abm_init_config_data.bytes = bytes;
+ cmd.abm_init_config.abm_init_config_data.bytes = (uint16_t)bytes;
cmd.abm_init_config.abm_init_config_data.version = DMUB_CMD_ABM_CONTROL_VERSION_1;
cmd.abm_init_config.abm_init_config_data.panel_mask = panel_mask;
@@ -237,7 +237,7 @@ bool dmub_abm_save_restore(
cmd.abm_save_restore.header.sub_type = DMUB_CMD__ABM_SAVE_RESTORE;
cmd.abm_save_restore.abm_init_config_data.src.quad_part = dc->dmub_srv->dmub->scratch_mem_fb.gpu_addr;
- cmd.abm_save_restore.abm_init_config_data.bytes = bytes;
+ cmd.abm_save_restore.abm_init_config_data.bytes = (uint16_t)bytes;
cmd.abm_save_restore.abm_init_config_data.version = DMUB_CMD_ABM_CONTROL_VERSION_1;
cmd.abm_save_restore.abm_init_config_data.panel_mask = panel_mask;
@@ -265,10 +265,10 @@ bool dmub_abm_set_pipe(struct abm *abm,
memset(&cmd, 0, sizeof(cmd));
cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
- cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
- cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = pwrseq_inst;
- cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = option;
- cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = (uint8_t)otg_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = (uint8_t)pwrseq_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = (uint8_t)option;
+ cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = (uint8_t)panel_inst;
cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
@@ -308,7 +308,7 @@ bool dmub_abm_set_event(struct abm *abm, unsigned int scaling_enable, unsigned i
memset(&cmd, 0, sizeof(cmd));
cmd.abm_set_event.header.type = DMUB_CMD__ABM;
cmd.abm_set_event.header.sub_type = DMUB_CMD__ABM_SET_EVENT;
- cmd.abm_set_event.abm_set_event_data.vb_scaling_enable = scaling_enable;
+ cmd.abm_set_event.abm_set_event_data.vb_scaling_enable = (uint8_t)scaling_enable;
cmd.abm_set_event.abm_set_event_data.vb_scaling_strength_mapping = scaling_strength_map;
cmd.abm_set_event.abm_set_event_data.panel_mask = (1<<panel_inst);
cmd.abm_set_event.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_event_data);
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
index 87af4fdc04a6..556bae8d4bae 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c
@@ -341,26 +341,26 @@ static bool dmub_psr_copy_settings(struct dmub_psr *dmub,
copy_settings_data->mpcc_inst = pipe_ctx->plane_res.mpcc_inst;
if (pipe_ctx->plane_res.dpp)
- copy_settings_data->dpp_inst = pipe_ctx->plane_res.dpp->inst;
+ copy_settings_data->dpp_inst = (uint8_t)pipe_ctx->plane_res.dpp->inst;
else
copy_settings_data->dpp_inst = 0;
if (pipe_ctx->stream_res.opp)
- copy_settings_data->opp_inst = pipe_ctx->stream_res.opp->inst;
+ copy_settings_data->opp_inst = (uint8_t)pipe_ctx->stream_res.opp->inst;
else
copy_settings_data->opp_inst = 0;
if (pipe_ctx->stream_res.tg)
- copy_settings_data->otg_inst = pipe_ctx->stream_res.tg->inst;
+ copy_settings_data->otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
else
copy_settings_data->otg_inst = 0;
// Misc
copy_settings_data->use_phy_fsm = link->ctx->dc->debug.psr_power_use_phy_fsm;
- copy_settings_data->psr_level = psr_context->psr_level.u32all;
+ copy_settings_data->psr_level = (uint16_t)psr_context->psr_level.u32all;
copy_settings_data->smu_optimizations_en = psr_context->allow_smu_optimizations;
copy_settings_data->multi_disp_optimizations_en = psr_context->allow_multi_disp_optimizations;
- copy_settings_data->frame_delay = psr_context->frame_delay;
+ copy_settings_data->frame_delay = (uint8_t)psr_context->frame_delay;
copy_settings_data->frame_cap_ind = psr_context->psrFrameCaptureIndicationReq;
- copy_settings_data->init_sdp_deadline = psr_context->sdpTransmitLineNumDeadline;
+ copy_settings_data->init_sdp_deadline = (uint16_t)psr_context->sdpTransmitLineNumDeadline;
copy_settings_data->debug.u32All = 0;
copy_settings_data->debug.bitfields.visual_confirm = dc->dc->debug.visual_confirm == VISUAL_CONFIRM_PSR;
copy_settings_data->debug.bitfields.use_hw_lock_mgr = 1;
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
index fbb7ee44c589..6d19da2230ae 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_replay.c
@@ -151,22 +151,23 @@ static bool dmub_replay_copy_settings(struct dmub_replay *dmub,
copy_settings_data->digfe_inst = replay_context->digfe_inst;
if (pipe_ctx->plane_res.dpp)
- copy_settings_data->dpp_inst = pipe_ctx->plane_res.dpp->inst;
+ copy_settings_data->dpp_inst = (uint8_t)pipe_ctx->plane_res.dpp->inst;
else
copy_settings_data->dpp_inst = 0;
+
if (pipe_ctx->stream_res.tg)
- copy_settings_data->otg_inst = pipe_ctx->stream_res.tg->inst;
+ copy_settings_data->otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
else
copy_settings_data->otg_inst = 0;
copy_settings_data->dpphy_inst = link->link_enc->transmitter;
// Misc
- copy_settings_data->line_time_in_ns = replay_context->line_time_in_ns;
- copy_settings_data->panel_inst = panel_inst;
- copy_settings_data->debug.u32All = link->replay_settings.config.debug_flags;
+ copy_settings_data->line_time_in_ns = (uint16_t)replay_context->line_time_in_ns;
+ copy_settings_data->panel_inst = (uint16_t)panel_inst;
+ copy_settings_data->debug.u32All = (uint16_t)link->replay_settings.config.debug_flags;
copy_settings_data->pixel_deviation_per_line = link->dpcd_caps.pr_info.pixel_deviation_per_line;
- copy_settings_data->max_deviation_line = link->dpcd_caps.pr_info.max_deviation_line;
+ copy_settings_data->max_deviation_line = (uint16_t)link->dpcd_caps.pr_info.max_deviation_line;
copy_settings_data->smu_optimizations_en = link->replay_settings.replay_smu_opt_enable;
copy_settings_data->replay_timing_sync_supported = link->replay_settings.config.replay_timing_sync_supported;
copy_settings_data->replay_support_fast_resync_in_ultra_sleep_mode = link->replay_settings.config.replay_support_fast_resync_in_ultra_sleep_mode;
@@ -193,13 +194,13 @@ static bool dmub_replay_copy_settings(struct dmub_replay *dmub,
copy_settings_data->flags.bitfields.alpm_mode = (enum dmub_alpm_mode)link->replay_settings.config.alpm_mode;
if (link->replay_settings.config.alpm_mode == DC_ALPM_AUXLESS) {
- copy_settings_data->auxless_alpm_data.lfps_setup_ns = dc->dc->debug.auxless_alpm_lfps_setup_ns;
- copy_settings_data->auxless_alpm_data.lfps_period_ns = dc->dc->debug.auxless_alpm_lfps_period_ns;
- copy_settings_data->auxless_alpm_data.lfps_silence_ns = dc->dc->debug.auxless_alpm_lfps_silence_ns;
+ copy_settings_data->auxless_alpm_data.lfps_setup_ns = (uint16_t)dc->dc->debug.auxless_alpm_lfps_setup_ns;
+ copy_settings_data->auxless_alpm_data.lfps_period_ns = (uint16_t)dc->dc->debug.auxless_alpm_lfps_period_ns;
+ copy_settings_data->auxless_alpm_data.lfps_silence_ns = (uint16_t)dc->dc->debug.auxless_alpm_lfps_silence_ns;
copy_settings_data->auxless_alpm_data.lfps_t1_t2_override_us =
- dc->dc->debug.auxless_alpm_lfps_t1t2_us;
+ (uint16_t)dc->dc->debug.auxless_alpm_lfps_t1t2_us;
copy_settings_data->auxless_alpm_data.lfps_t1_t2_offset_us =
- dc->dc->debug.auxless_alpm_lfps_t1t2_offset_us;
+ (uint16_t)dc->dc->debug.auxless_alpm_lfps_t1t2_offset_us;
copy_settings_data->auxless_alpm_data.lttpr_count = link->dc->link_srv->dp_get_lttpr_count(link);
}
diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_timing_generator.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_timing_generator.c
index 53c03364f5d4..20c18ac87998 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_timing_generator.c
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_timing_generator.c
@@ -98,7 +98,7 @@ static void program_pix_dur(struct timing_generator *tg, uint32_t pix_clk_100hz)
set_reg_field_value(
value,
- pix_dur,
+ (uint32_t)pix_dur,
DPG_PIPE_ARBITRATION_CONTROL1,
PIXEL_DURATION);
diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
index c702a30563f9..9ffc7fd3212e 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_cm_common.c
@@ -75,8 +75,8 @@ void cm_helper_read_color_matrices(struct dc_context *ctx,
csc_c11, &regval0,
csc_c12, &regval1);
- regval[2 * i] = regval0;
- regval[(2 * i) + 1] = regval1;
+ regval[2 * i] = (uint16_t)regval0;
+ regval[(2 * i) + 1] = (uint16_t)regval1;
i++;
}
diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mmhubbub.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mmhubbub.c
index 6f2a0d5d963b..33a4c07a057c 100644
--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mmhubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mmhubbub.c
@@ -107,35 +107,35 @@ static void mmhubbub3_config_mcif_buf(struct mcif_wb *mcif_wb,
struct dcn30_mmhubbub *mcif_wb30 = TO_DCN30_MMHUBBUB(mcif_wb);
/* buffer address for packing mode or Luma in planar mode */
- REG_UPDATE(MCIF_WB_BUF_1_ADDR_Y, MCIF_WB_BUF_1_ADDR_Y, MCIF_ADDR(params->luma_address[0]));
+ REG_UPDATE(MCIF_WB_BUF_1_ADDR_Y, MCIF_WB_BUF_1_ADDR_Y, (uint32_t)MCIF_ADDR(params->luma_address[0]));
REG_UPDATE(MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_WB_BUF_1_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[0]));
/* buffer address for Chroma in planar mode (unused in packing mode) */
- REG_UPDATE(MCIF_WB_BUF_1_ADDR_C, MCIF_WB_BUF_1_ADDR_C, MCIF_ADDR(params->chroma_address[0]));
+ REG_UPDATE(MCIF_WB_BUF_1_ADDR_C, MCIF_WB_BUF_1_ADDR_C, (uint32_t)MCIF_ADDR(params->chroma_address[0]));
REG_UPDATE(MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_WB_BUF_1_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[0]));
/* buffer address for packing mode or Luma in planar mode */
- REG_UPDATE(MCIF_WB_BUF_2_ADDR_Y, MCIF_WB_BUF_2_ADDR_Y, MCIF_ADDR(params->luma_address[1]));
+ REG_UPDATE(MCIF_WB_BUF_2_ADDR_Y, MCIF_WB_BUF_2_ADDR_Y, (uint32_t)MCIF_ADDR(params->luma_address[1]));
REG_UPDATE(MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_WB_BUF_2_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[1]));
/* buffer address for Chroma in planar mode (unused in packing mode) */
- REG_UPDATE(MCIF_WB_BUF_2_ADDR_C, MCIF_WB_BUF_2_ADDR_C, MCIF_ADDR(params->chroma_address[1]));
+ REG_UPDATE(MCIF_WB_BUF_2_ADDR_C, MCIF_WB_BUF_2_ADDR_C, (uint32_t)MCIF_ADDR(params->chroma_address[1]));
REG_UPDATE(MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_WB_BUF_2_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[1]));
/* buffer address for packing mode or Luma in planar mode */
- REG_UPDATE(MCIF_WB_BUF_3_ADDR_Y, MCIF_WB_BUF_3_ADDR_Y, MCIF_ADDR(params->luma_address[2]));
+ REG_UPDATE(MCIF_WB_BUF_3_ADDR_Y, MCIF_WB_BUF_3_ADDR_Y, (uint32_t)MCIF_ADDR(params->luma_address[2]));
REG_UPDATE(MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_WB_BUF_3_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[2]));
/* buffer address for Chroma in planar mode (unused in packing mode) */
- REG_UPDATE(MCIF_WB_BUF_3_ADDR_C, MCIF_WB_BUF_3_ADDR_C, MCIF_ADDR(params->chroma_address[2]));
+ REG_UPDATE(MCIF_WB_BUF_3_ADDR_C, MCIF_WB_BUF_3_ADDR_C, (uint32_t)MCIF_ADDR(params->chroma_address[2]));
REG_UPDATE(MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_WB_BUF_3_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[2]));
/* buffer address for packing mode or Luma in planar mode */
- REG_UPDATE(MCIF_WB_BUF_4_ADDR_Y, MCIF_WB_BUF_4_ADDR_Y, MCIF_ADDR(params->luma_address[3]));
+ REG_UPDATE(MCIF_WB_BUF_4_ADDR_Y, MCIF_WB_BUF_4_ADDR_Y, (uint32_t)MCIF_ADDR(params->luma_address[3]));
REG_UPDATE(MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_WB_BUF_4_ADDR_Y_HIGH, MCIF_ADDR_HIGH(params->luma_address[3]));
/* buffer address for Chroma in planar mode (unused in packing mode) */
- REG_UPDATE(MCIF_WB_BUF_4_ADDR_C, MCIF_WB_BUF_4_ADDR_C, MCIF_ADDR(params->chroma_address[3]));
+ REG_UPDATE(MCIF_WB_BUF_4_ADDR_C, MCIF_WB_BUF_4_ADDR_C, (uint32_t)MCIF_ADDR(params->chroma_address[3]));
REG_UPDATE(MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_WB_BUF_4_ADDR_C_HIGH, MCIF_ADDR_HIGH(params->chroma_address[3]));
/* setup luma & chroma size
diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
index 2d33ed0c062d..79d0c2896d29 100644
--- a/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dio/dcn401/dcn401_dio_stream_encoder.c
@@ -43,7 +43,7 @@
#undef FN
#define FN(reg_name, field_name) \
- enc1->se_shift->field_name, enc1->se_mask->field_name
+ (uint8_t)enc1->se_shift->field_name, enc1->se_mask->field_name
#define VBI_LINE_0 0
#define HDMI_CLOCK_CHANNEL_RATE_MORE_340M 340000
diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_stream_encoder.c
index 65afbfcaa96b..55ddb9cf8a52 100644
--- a/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_stream_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_stream_encoder.c
@@ -23,7 +23,7 @@
#undef FN
#define FN(reg_name, field_name) \
- enc1->se_shift->field_name, enc1->se_mask->field_name
+ (uint8_t)enc1->se_shift->field_name, enc1->se_mask->field_name
#define VBI_LINE_0 0
#define HDMI_CLOCK_CHANNEL_RATE_MORE_340M 340000
@@ -401,7 +401,7 @@ void enc42_se_enable_audio_clock(
{
struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
- REG_UPDATE(DIG_FE_AUDIO_CNTL, APG_CLOCK_ENABLE, !!enable);
+ REG_UPDATE(DIG_FE_AUDIO_CNTL, APG_CLOCK_ENABLE, enable);
}
diff --git a/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
index 61553e24d53e..a95f94d6c7c3 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/calcs/dcn_calcs.c
@@ -533,13 +533,14 @@ static void split_stream_across_pipes(
*secondary_pipe = *primary_pipe;
- secondary_pipe->pipe_idx = pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)pipe_idx;
secondary_pipe->plane_res.mi = pool->mis[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.hubp = pool->hubps[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.ipp = pool->ipps[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.xfm = pool->transforms[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.dpp = pool->dpps[secondary_pipe->pipe_idx];
- secondary_pipe->plane_res.mpcc_inst = pool->dpps[secondary_pipe->pipe_idx]->inst;
+ secondary_pipe->plane_res.mpcc_inst =
+ (uint8_t)pool->dpps[secondary_pipe->pipe_idx]->inst;
if (primary_pipe->bottom_pipe) {
ASSERT(primary_pipe->bottom_pipe != secondary_pipe);
secondary_pipe->bottom_pipe = primary_pipe->bottom_pipe;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
index 887744d56d6a..ed9dd2148d86 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
@@ -1402,11 +1402,12 @@ int dcn20_populate_dml_pipes_from_context(struct dc *dc,
timing->h_addressable + timing->h_border_left + timing->h_border_right;
pipes[pipe_cnt].pipe.dest.vactive =
timing->v_addressable + timing->v_border_top + timing->v_border_bottom;
- pipes[pipe_cnt].pipe.dest.interlaced = timing->flags.INTERLACE;
+ pipes[pipe_cnt].pipe.dest.interlaced = (unsigned char)timing->flags.INTERLACE;
pipes[pipe_cnt].pipe.dest.pixel_rate_mhz = timing->pix_clk_100hz/10000.0;
if (timing->timing_3d_format == TIMING_3D_FORMAT_HW_FRAME_PACKING)
pipes[pipe_cnt].pipe.dest.pixel_rate_mhz *= 2;
- pipes[pipe_cnt].pipe.dest.otg_inst = res_ctx->pipe_ctx[i].stream_res.tg->inst;
+ pipes[pipe_cnt].pipe.dest.otg_inst =
+ (unsigned char)res_ctx->pipe_ctx[i].stream_res.tg->inst;
pipes[pipe_cnt].dout.dp_lanes = 4;
pipes[pipe_cnt].dout.dp_rate = dm_dp_rate_na;
pipes[pipe_cnt].dout.is_virtual = 0;
@@ -1879,7 +1880,7 @@ void dcn20_update_bounding_box(struct dc *dc,
bb->clock_limits[i].dram_speed_mts = uclk_states[i] * 16 / 1000;
// FCLK:UCLK ratio is 1.08
- min_fclk_required_by_uclk = div_u64(((unsigned long long)uclk_states[i]) * 1080,
+ min_fclk_required_by_uclk = (int)div_u64(((unsigned long long)uclk_states[i]) * 1080,
1000000);
bb->clock_limits[i].fabricclk_mhz = (min_fclk_required_by_uclk < min_dcfclk) ?
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
index 0cdd60869ce1..354641312acc 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/dcn30_fpu.c
@@ -662,7 +662,7 @@ void dcn3_fpu_build_wm_range_table(struct clk_mgr *base)
double pstate_latency_us = base->ctx->dc->dml.soc.dram_clock_change_latency_us;
double sr_exit_time_us = base->ctx->dc->dml.soc.sr_exit_time_us;
double sr_enter_plus_exit_time_us = base->ctx->dc->dml.soc.sr_enter_plus_exit_time_us;
- uint16_t min_uclk_mhz = base->bw_params->clk_table.entries[0].memclk_mhz;
+ uint16_t min_uclk_mhz = (uint16_t)base->bw_params->clk_table.entries[0].memclk_mhz;
dc_assert_fp_enabled();
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
index eb199215d298..f5ddf771e73d 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn32/dcn32_fpu.c
@@ -191,21 +191,24 @@ void dcn32_build_wm_range_table_fpu(struct clk_mgr_internal *clk_mgr)
double sr_exit_time_us = clk_mgr->base.ctx->dc->dml.soc.sr_exit_time_us;
double sr_enter_plus_exit_time_us = clk_mgr->base.ctx->dc->dml.soc.sr_enter_plus_exit_time_us;
/* For min clocks use as reported by PM FW and report those as min */
- uint16_t min_uclk_mhz = clk_mgr->base.bw_params->clk_table.entries[0].memclk_mhz;
- uint16_t min_dcfclk_mhz = clk_mgr->base.bw_params->clk_table.entries[0].dcfclk_mhz;
+ uint16_t min_uclk_mhz = (uint16_t)clk_mgr->base.bw_params->clk_table.entries[0].memclk_mhz;
+ uint16_t min_dcfclk_mhz = (uint16_t)clk_mgr->base.bw_params->clk_table.entries[0].dcfclk_mhz;
uint16_t setb_min_uclk_mhz = min_uclk_mhz;
- uint16_t dcfclk_mhz_for_the_second_state = clk_mgr->base.ctx->dc->dml.soc.clock_limits[2].dcfclk_mhz;
+ uint16_t dcfclk_mhz_for_the_second_state =
+ (uint16_t)clk_mgr->base.ctx->dc->dml.soc.clock_limits[2].dcfclk_mhz;
dc_assert_fp_enabled();
/* For Set B ranges use min clocks state 2 when available, and report those to PM FW */
- if (dcfclk_mhz_for_the_second_state)
- clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_dcfclk = dcfclk_mhz_for_the_second_state;
- else
- clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_dcfclk = clk_mgr->base.bw_params->clk_table.entries[0].dcfclk_mhz;
+ if (dcfclk_mhz_for_the_second_state)
+ clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_dcfclk =
+ dcfclk_mhz_for_the_second_state;
+ else
+ clk_mgr->base.bw_params->wm_table.nv_entries[WM_B].pmfw_breakdown.min_dcfclk =
+ (uint16_t)clk_mgr->base.bw_params->clk_table.entries[0].dcfclk_mhz;
if (clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz)
- setb_min_uclk_mhz = clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz;
+ setb_min_uclk_mhz = (uint16_t)clk_mgr->base.bw_params->clk_table.entries[2].memclk_mhz;
/* Set A - Normal - default values */
clk_mgr->base.bw_params->wm_table.nv_entries[WM_A].valid = true;
@@ -901,7 +904,7 @@ static bool subvp_vblank_schedulable(struct dc *dc, struct dc_state *context)
struct pipe_ctx *subvp_pipe = NULL;
bool found = false;
bool schedulable = false;
- uint32_t i = 0;
+ uint8_t i = 0;
uint8_t vblank_index = 0;
uint16_t prefetch_us = 0;
uint16_t mall_region_us = 0;
@@ -986,7 +989,7 @@ static bool subvp_subvp_admissable(struct dc *dc,
struct dc_state *context)
{
bool result = false;
- uint32_t i;
+ uint8_t i;
uint8_t subvp_count = 0;
uint32_t min_refresh = subvp_high_refresh_list.min_refresh, max_refresh = 0;
uint64_t refresh_rate = 0;
@@ -1779,7 +1782,7 @@ static struct pipe_ctx *dcn32_find_split_pipe(
if (old_index >= 0 && context->res_ctx.pipe_ctx[old_index].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[old_index];
- pipe->pipe_idx = old_index;
+ pipe->pipe_idx = (uint8_t)old_index;
}
if (!pipe)
@@ -1788,7 +1791,7 @@ static struct pipe_ctx *dcn32_find_split_pipe(
&& dc->current_state->res_ctx.pipe_ctx[i].prev_odm_pipe == NULL) {
if (context->res_ctx.pipe_ctx[i].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[i];
- pipe->pipe_idx = i;
+ pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -1803,7 +1806,7 @@ static struct pipe_ctx *dcn32_find_split_pipe(
for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
if (context->res_ctx.pipe_ctx[i].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[i];
- pipe->pipe_idx = i;
+ pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -1846,13 +1849,13 @@ static bool dcn32_split_stream_for_mpc_or_odm(
*sec_pipe = *pri_pipe;
- sec_pipe->pipe_idx = pipe_idx;
+ sec_pipe->pipe_idx = (uint8_t)pipe_idx;
sec_pipe->plane_res.mi = pool->mis[pipe_idx];
sec_pipe->plane_res.hubp = pool->hubps[pipe_idx];
sec_pipe->plane_res.ipp = pool->ipps[pipe_idx];
sec_pipe->plane_res.xfm = pool->transforms[pipe_idx];
sec_pipe->plane_res.dpp = pool->dpps[pipe_idx];
- sec_pipe->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+ sec_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[pipe_idx]->inst;
sec_pipe->stream_res.dsc = NULL;
if (odm) {
if (pri_pipe->next_odm_pipe) {
@@ -3365,8 +3368,8 @@ bool dcn32_allow_subvp_with_active_margin(struct pipe_ctx *pipe)
refresh_rate = (pipe->stream->timing.pix_clk_100hz * (uint64_t)100 +
(uint64_t)pipe->stream->timing.v_total * pipe->stream->timing.h_total - (uint64_t)1);
- refresh_rate = div_u64(refresh_rate, pipe->stream->timing.v_total);
- refresh_rate = div_u64(refresh_rate, pipe->stream->timing.h_total);
+ refresh_rate = (uint32_t)div_u64(refresh_rate, pipe->stream->timing.v_total);
+ refresh_rate = (uint32_t)div_u64(refresh_rate, pipe->stream->timing.h_total);
if (refresh_rate >= min_refresh && refresh_rate <= max_refresh &&
dcn32_check_native_scaling_for_res(pipe, width, height)) {
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
index 6e1e759462bf..9e63d075c1cf 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
@@ -405,12 +405,12 @@ bool dsc_prepare_config(const struct dsc_config *dsc_cfg, struct dsc_reg_values
dsc_reg_vals->pixel_format = dsc_dc_pixel_encoding_to_dsc_pixel_format(dsc_cfg->pixel_encoding, dsc_cfg->dc_dsc_cfg.ycbcr422_simple);
dsc_reg_vals->num_slices_h = dsc_cfg->dc_dsc_cfg.num_slices_h;
dsc_reg_vals->num_slices_v = dsc_cfg->dc_dsc_cfg.num_slices_v;
- dsc_reg_vals->pps.dsc_version_minor = dsc_cfg->dc_dsc_cfg.version_minor;
- dsc_reg_vals->pps.pic_width = dsc_cfg->pic_width;
- dsc_reg_vals->pps.pic_height = dsc_cfg->pic_height;
+ dsc_reg_vals->pps.dsc_version_minor = (u8)dsc_cfg->dc_dsc_cfg.version_minor;
+ dsc_reg_vals->pps.pic_width = (u16)dsc_cfg->pic_width;
+ dsc_reg_vals->pps.pic_height = (u16)dsc_cfg->pic_height;
dsc_reg_vals->pps.bits_per_component = dsc_dc_color_depth_to_dsc_bits_per_comp(dsc_cfg->color_depth);
dsc_reg_vals->pps.block_pred_enable = dsc_cfg->dc_dsc_cfg.block_pred_enable;
- dsc_reg_vals->pps.line_buf_depth = dsc_cfg->dc_dsc_cfg.linebuf_depth;
+ dsc_reg_vals->pps.line_buf_depth = (u8)dsc_cfg->dc_dsc_cfg.linebuf_depth;
dsc_reg_vals->alternate_ich_encoding_en = dsc_reg_vals->pps.dsc_version_minor == 1 ? 0 : 1;
dsc_reg_vals->ich_reset_at_eol = (dsc_cfg->is_odm || dsc_reg_vals->num_slices_h > 1) ? 0xF : 0;
@@ -428,9 +428,9 @@ bool dsc_prepare_config(const struct dsc_config *dsc_cfg, struct dsc_reg_values
dsc_reg_vals->bpp_x32 = dsc_cfg->dc_dsc_cfg.bits_per_pixel << 1;
if (dsc_reg_vals->pixel_format == DSC_PIXFMT_NATIVE_YCBCR420 || dsc_reg_vals->pixel_format == DSC_PIXFMT_NATIVE_YCBCR422)
- dsc_reg_vals->pps.bits_per_pixel = dsc_reg_vals->bpp_x32;
+ dsc_reg_vals->pps.bits_per_pixel = (u16)dsc_reg_vals->bpp_x32;
else
- dsc_reg_vals->pps.bits_per_pixel = dsc_reg_vals->bpp_x32 >> 1;
+ dsc_reg_vals->pps.bits_per_pixel = (u16)(dsc_reg_vals->bpp_x32 >> 1);
dsc_reg_vals->pps.convert_rgb = dsc_reg_vals->pixel_format == DSC_PIXFMT_RGB ? 1 : 0;
dsc_reg_vals->pps.native_422 = (dsc_reg_vals->pixel_format == DSC_PIXFMT_NATIVE_YCBCR422);
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c b/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
index 59864130cf83..a34031b5c9d5 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
@@ -72,27 +72,27 @@ static void copy_rc_to_cfg(struct drm_dsc_config *dsc_cfg, const struct rc_param
{
int i;
- dsc_cfg->rc_quant_incr_limit0 = rc->rc_quant_incr_limit0;
- dsc_cfg->rc_quant_incr_limit1 = rc->rc_quant_incr_limit1;
- dsc_cfg->initial_offset = rc->initial_fullness_offset;
- dsc_cfg->initial_xmit_delay = rc->initial_xmit_delay;
- dsc_cfg->first_line_bpg_offset = rc->first_line_bpg_offset;
- dsc_cfg->second_line_bpg_offset = rc->second_line_bpg_offset;
- dsc_cfg->flatness_min_qp = rc->flatness_min_qp;
- dsc_cfg->flatness_max_qp = rc->flatness_max_qp;
+ dsc_cfg->rc_quant_incr_limit0 = (u8)rc->rc_quant_incr_limit0;
+ dsc_cfg->rc_quant_incr_limit1 = (u8)rc->rc_quant_incr_limit1;
+ dsc_cfg->initial_offset = (u16)rc->initial_fullness_offset;
+ dsc_cfg->initial_xmit_delay = (u16)rc->initial_xmit_delay;
+ dsc_cfg->first_line_bpg_offset = (u8)rc->first_line_bpg_offset;
+ dsc_cfg->second_line_bpg_offset = (u8)rc->second_line_bpg_offset;
+ dsc_cfg->flatness_min_qp = (u8)rc->flatness_min_qp;
+ dsc_cfg->flatness_max_qp = (u8)rc->flatness_max_qp;
for (i = 0; i < QP_SET_SIZE; ++i) {
- dsc_cfg->rc_range_params[i].range_min_qp = rc->qp_min[i];
- dsc_cfg->rc_range_params[i].range_max_qp = rc->qp_max[i];
+ dsc_cfg->rc_range_params[i].range_min_qp = (u8)rc->qp_min[i];
+ dsc_cfg->rc_range_params[i].range_max_qp = (u8)rc->qp_max[i];
/* Truncate 8-bit signed value to 6-bit signed value */
dsc_cfg->rc_range_params[i].range_bpg_offset = 0x3f & rc->ofs[i];
}
- dsc_cfg->rc_model_size = rc->rc_model_size;
- dsc_cfg->rc_edge_factor = rc->rc_edge_factor;
- dsc_cfg->rc_tgt_offset_high = rc->rc_tgt_offset_hi;
- dsc_cfg->rc_tgt_offset_low = rc->rc_tgt_offset_lo;
+ dsc_cfg->rc_model_size = (u16)rc->rc_model_size;
+ dsc_cfg->rc_edge_factor = (u8)rc->rc_edge_factor;
+ dsc_cfg->rc_tgt_offset_high = (u8)rc->rc_tgt_offset_hi;
+ dsc_cfg->rc_tgt_offset_low = (u8)rc->rc_tgt_offset_lo;
for (i = 0; i < QP_SET_SIZE - 1; ++i)
- dsc_cfg->rc_buf_thresh[i] = rc->rc_buf_thresh[i];
+ dsc_cfg->rc_buf_thresh[i] = (u16)rc->rc_buf_thresh[i];
}
int dscc_compute_dsc_parameters(const struct drm_dsc_config *pps,
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c
index d9e6e70dc394..b99a361e68e6 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c
@@ -36,7 +36,7 @@
#undef FN
#define FN(reg_name, field_name) \
- ddc->shifts->field_name, ddc->masks->field_name
+ gpio_reg_shift(ddc->shifts->field_name), ddc->masks->field_name
#define CTX \
ddc->base.base.ctx
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_generic.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_generic.c
index 6cd50232c432..1bb295f6f3b9 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_generic.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_generic.c
@@ -37,7 +37,7 @@
#undef FN
#define FN(reg_name, field_name) \
- generic->shifts->field_name, generic->masks->field_name
+ gpio_reg_shift(generic->shifts->field_name), generic->masks->field_name
#define CTX \
generic->base.base.ctx
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c
index f0d400972897..07651dc7e205 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c
@@ -32,7 +32,7 @@
#undef FN
#define FN(reg_name, field_name) \
- gpio->regs->field_name ## _shift, gpio->regs->field_name ## _mask
+ gpio_reg_shift(gpio->regs->field_name ## _shift), gpio->regs->field_name ## _mask
#define CTX \
gpio->base.ctx
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h b/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h
index bca0cef18ff9..5b551068cdd6 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h
@@ -141,4 +141,13 @@ enum gpio_result dal_hw_gpio_change_mode(
void dal_hw_gpio_close(
struct hw_gpio_pin *ptr);
+/*
+ * Shared helper used by all GPIO register helpers that pass a field shift
+ * (stored as uint32_t) into register functions that expect uint8_t.
+ */
+static inline uint8_t gpio_reg_shift(uint32_t shift)
+{
+ return (uint8_t)shift;
+}
+
#endif
diff --git a/drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c b/drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c
index 01ec451004f7..b81a2f2630a6 100644
--- a/drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c
+++ b/drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c
@@ -35,7 +35,7 @@
#undef FN
#define FN(reg_name, field_name) \
- hpd->shifts->field_name, hpd->masks->field_name
+ gpio_reg_shift(hpd->shifts->field_name), hpd->masks->field_name
#define CTX \
hpd->base.base.ctx
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn10/dcn10_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn10/dcn10_hubbub.c
index 97ef8281a476..d683d0740c13 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn10/dcn10_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn10/dcn10_hubbub.c
@@ -638,27 +638,27 @@ void hubbub1_update_dchub(
SDPIF_FB_BASE, 0x0FFFF);
REG_UPDATE(DCHUBBUB_SDPIF_AGP_BASE,
- SDPIF_AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+ SDPIF_AGP_BASE, (uint32_t)(dh_data->zfb_phys_addr_base >> 22));
REG_UPDATE(DCHUBBUB_SDPIF_AGP_BOT,
- SDPIF_AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+ SDPIF_AGP_BOT, (uint32_t)(dh_data->zfb_mc_base_addr >> 22));
REG_UPDATE(DCHUBBUB_SDPIF_AGP_TOP,
- SDPIF_AGP_TOP, (dh_data->zfb_mc_base_addr +
- dh_data->zfb_size_in_byte - 1) >> 22);
+ SDPIF_AGP_TOP, (uint32_t)((dh_data->zfb_mc_base_addr +
+ dh_data->zfb_size_in_byte - 1) >> 22));
break;
case FRAME_BUFFER_MODE_MIXED_ZFB_AND_LOCAL:
/*Should not touch FB LOCATION (done by VBIOS on AsicInit table)*/
REG_UPDATE(DCHUBBUB_SDPIF_AGP_BASE,
- SDPIF_AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+ SDPIF_AGP_BASE, (uint32_t)(dh_data->zfb_phys_addr_base >> 22));
REG_UPDATE(DCHUBBUB_SDPIF_AGP_BOT,
- SDPIF_AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+ SDPIF_AGP_BOT, (uint32_t)(dh_data->zfb_mc_base_addr >> 22));
REG_UPDATE(DCHUBBUB_SDPIF_AGP_TOP,
- SDPIF_AGP_TOP, (dh_data->zfb_mc_base_addr +
- dh_data->zfb_size_in_byte - 1) >> 22);
+ SDPIF_AGP_TOP, (uint32_t)((dh_data->zfb_mc_base_addr +
+ dh_data->zfb_size_in_byte - 1) >> 22));
break;
case FRAME_BUFFER_MODE_LOCAL_ONLY:
/*Should not touch FB LOCATION (done by VBIOS on AsicInit table)*/
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.c
index 5c6f7ddafd6b..053a08b6d3a3 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.c
@@ -398,17 +398,17 @@ int hubbub2_init_dchub_sys_ctx(struct hubbub *hubbub,
struct dcn_vmid_page_table_config phys_config;
REG_SET(DCN_VM_FB_LOCATION_BASE, 0,
- FB_BASE, pa_config->system_aperture.fb_base >> 24);
+ FB_BASE, ADDR_HI24(pa_config->system_aperture.fb_base));
REG_SET(DCN_VM_FB_LOCATION_TOP, 0,
- FB_TOP, pa_config->system_aperture.fb_top >> 24);
+ FB_TOP, ADDR_HI24(pa_config->system_aperture.fb_top));
REG_SET(DCN_VM_FB_OFFSET, 0,
- FB_OFFSET, pa_config->system_aperture.fb_offset >> 24);
+ FB_OFFSET, ADDR_HI24(pa_config->system_aperture.fb_offset));
REG_SET(DCN_VM_AGP_BOT, 0,
- AGP_BOT, pa_config->system_aperture.agp_bot >> 24);
+ AGP_BOT, ADDR_HI24(pa_config->system_aperture.agp_bot));
REG_SET(DCN_VM_AGP_TOP, 0,
- AGP_TOP, pa_config->system_aperture.agp_top >> 24);
+ AGP_TOP, ADDR_HI24(pa_config->system_aperture.agp_top));
REG_SET(DCN_VM_AGP_BASE, 0,
- AGP_BASE, pa_config->system_aperture.agp_base >> 24);
+ AGP_BASE, ADDR_HI24(pa_config->system_aperture.agp_base));
REG_SET(DCN_VM_PROTECTION_FAULT_DEFAULT_ADDR_MSB, 0,
DCN_VM_PROTECTION_FAULT_DEFAULT_ADDR_MSB, (pa_config->page_table_default_page_addr >> 44) & 0xF);
@@ -447,36 +447,36 @@ void hubbub2_update_dchub(struct hubbub *hubbub,
/*This field defines the 24 MSBs, bits [47:24] of the 48 bit AGP Base*/
REG_UPDATE(DCN_VM_AGP_BASE,
- AGP_BASE, dh_data->zfb_phys_addr_base >> 24);
+ AGP_BASE, ADDR_HI24(dh_data->zfb_phys_addr_base));
/*This field defines the bottom range of the AGP aperture and represents the 24*/
/*MSBs, bits [47:24] of the 48 address bits*/
REG_UPDATE(DCN_VM_AGP_BOT,
- AGP_BOT, dh_data->zfb_mc_base_addr >> 24);
+ AGP_BOT, ADDR_HI24(dh_data->zfb_mc_base_addr));
/*This field defines the top range of the AGP aperture and represents the 24*/
/*MSBs, bits [47:24] of the 48 address bits*/
REG_UPDATE(DCN_VM_AGP_TOP,
- AGP_TOP, (dh_data->zfb_mc_base_addr +
- dh_data->zfb_size_in_byte - 1) >> 24);
+ AGP_TOP, ADDR_HI24(dh_data->zfb_mc_base_addr +
+ dh_data->zfb_size_in_byte - 1));
break;
case FRAME_BUFFER_MODE_MIXED_ZFB_AND_LOCAL:
/*Should not touch FB LOCATION (done by VBIOS on AsicInit table)*/
/*This field defines the 24 MSBs, bits [47:24] of the 48 bit AGP Base*/
REG_UPDATE(DCN_VM_AGP_BASE,
- AGP_BASE, dh_data->zfb_phys_addr_base >> 24);
+ AGP_BASE, ADDR_HI24(dh_data->zfb_phys_addr_base));
/*This field defines the bottom range of the AGP aperture and represents the 24*/
/*MSBs, bits [47:24] of the 48 address bits*/
REG_UPDATE(DCN_VM_AGP_BOT,
- AGP_BOT, dh_data->zfb_mc_base_addr >> 24);
+ AGP_BOT, ADDR_HI24(dh_data->zfb_mc_base_addr));
/*This field defines the top range of the AGP aperture and represents the 24*/
/*MSBs, bits [47:24] of the 48 address bits*/
REG_UPDATE(DCN_VM_AGP_TOP,
- AGP_TOP, (dh_data->zfb_mc_base_addr +
- dh_data->zfb_size_in_byte - 1) >> 24);
+ AGP_TOP, ADDR_HI24(dh_data->zfb_mc_base_addr +
+ dh_data->zfb_size_in_byte - 1));
break;
case FRAME_BUFFER_MODE_LOCAL_ONLY:
/*Should not touch FB LOCATION (should be done by VBIOS)*/
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.h b/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.h
index 46d8f5c70750..6223dfaee270 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.h
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn20/dcn20_hubbub.h
@@ -141,4 +141,7 @@ void hubbub2_wm_read_state(struct hubbub *hubbub,
void hubbub2_read_state(struct hubbub *hubbub,
struct dcn_hubbub_state *hubbub_state);
+/* Extract bits [47:24] of a physical address for hardware register fields */
+#define ADDR_HI24(a) ((uint32_t)((uint64_t)(a) >> 24))
+
#endif
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn21/dcn21_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn21/dcn21_hubbub.c
index e4496ad203b2..d790d6ee359a 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn21/dcn21_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn21/dcn21_hubbub.c
@@ -111,17 +111,17 @@ int hubbub21_init_dchub(struct hubbub *hubbub,
struct dcn_vmid_page_table_config phys_config;
REG_SET(DCN_VM_FB_LOCATION_BASE, 0,
- FB_BASE, pa_config->system_aperture.fb_base >> 24);
+ FB_BASE, ADDR_HI24(pa_config->system_aperture.fb_base));
REG_SET(DCN_VM_FB_LOCATION_TOP, 0,
- FB_TOP, pa_config->system_aperture.fb_top >> 24);
+ FB_TOP, ADDR_HI24(pa_config->system_aperture.fb_top));
REG_SET(DCN_VM_FB_OFFSET, 0,
- FB_OFFSET, pa_config->system_aperture.fb_offset >> 24);
+ FB_OFFSET, ADDR_HI24(pa_config->system_aperture.fb_offset));
REG_SET(DCN_VM_AGP_BOT, 0,
- AGP_BOT, pa_config->system_aperture.agp_bot >> 24);
+ AGP_BOT, ADDR_HI24(pa_config->system_aperture.agp_bot));
REG_SET(DCN_VM_AGP_TOP, 0,
- AGP_TOP, pa_config->system_aperture.agp_top >> 24);
+ AGP_TOP, ADDR_HI24(pa_config->system_aperture.agp_top));
REG_SET(DCN_VM_AGP_BASE, 0,
- AGP_BASE, pa_config->system_aperture.agp_base >> 24);
+ AGP_BASE, ADDR_HI24(pa_config->system_aperture.agp_base));
if (pa_config->gart_config.page_table_start_addr != pa_config->gart_config.page_table_end_addr) {
phys_config.page_table_start_addr = pa_config->gart_config.page_table_start_addr >> 12;
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
index 181a93dc46e6..6a7c1bee5747 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
@@ -68,17 +68,17 @@ int hubbub3_init_dchub_sys_ctx(struct hubbub *hubbub,
struct dcn_vmid_page_table_config phys_config;
REG_SET(DCN_VM_FB_LOCATION_BASE, 0,
- FB_BASE, pa_config->system_aperture.fb_base >> 24);
+ FB_BASE, ADDR_HI24(pa_config->system_aperture.fb_base));
REG_SET(DCN_VM_FB_LOCATION_TOP, 0,
- FB_TOP, pa_config->system_aperture.fb_top >> 24);
+ FB_TOP, ADDR_HI24(pa_config->system_aperture.fb_top));
REG_SET(DCN_VM_FB_OFFSET, 0,
- FB_OFFSET, pa_config->system_aperture.fb_offset >> 24);
+ FB_OFFSET, ADDR_HI24(pa_config->system_aperture.fb_offset));
REG_SET(DCN_VM_AGP_BOT, 0,
- AGP_BOT, pa_config->system_aperture.agp_bot >> 24);
+ AGP_BOT, ADDR_HI24(pa_config->system_aperture.agp_bot));
REG_SET(DCN_VM_AGP_TOP, 0,
- AGP_TOP, pa_config->system_aperture.agp_top >> 24);
+ AGP_TOP, ADDR_HI24(pa_config->system_aperture.agp_top));
REG_SET(DCN_VM_AGP_BASE, 0,
- AGP_BASE, pa_config->system_aperture.agp_base >> 24);
+ AGP_BASE, ADDR_HI24(pa_config->system_aperture.agp_base));
if (pa_config->gart_config.page_table_start_addr != pa_config->gart_config.page_table_end_addr) {
phys_config.page_table_start_addr = pa_config->gart_config.page_table_start_addr >> 12;
diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
index 3c298192f359..79cb506be5cb 100644
--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
@@ -910,17 +910,17 @@ int hubbub31_init_dchub_sys_ctx(struct hubbub *hubbub,
struct dcn_vmid_page_table_config phys_config;
REG_SET(DCN_VM_FB_LOCATION_BASE, 0,
- FB_BASE, pa_config->system_aperture.fb_base >> 24);
+ FB_BASE, ADDR_HI24(pa_config->system_aperture.fb_base));
REG_SET(DCN_VM_FB_LOCATION_TOP, 0,
- FB_TOP, pa_config->system_aperture.fb_top >> 24);
+ FB_TOP, ADDR_HI24(pa_config->system_aperture.fb_top));
REG_SET(DCN_VM_FB_OFFSET, 0,
- FB_OFFSET, pa_config->system_aperture.fb_offset >> 24);
+ FB_OFFSET, ADDR_HI24(pa_config->system_aperture.fb_offset));
REG_SET(DCN_VM_AGP_BOT, 0,
- AGP_BOT, pa_config->system_aperture.agp_bot >> 24);
+ AGP_BOT, ADDR_HI24(pa_config->system_aperture.agp_bot));
REG_SET(DCN_VM_AGP_TOP, 0,
- AGP_TOP, pa_config->system_aperture.agp_top >> 24);
+ AGP_TOP, ADDR_HI24(pa_config->system_aperture.agp_top));
REG_SET(DCN_VM_AGP_BASE, 0,
- AGP_BASE, pa_config->system_aperture.agp_base >> 24);
+ AGP_BASE, ADDR_HI24(pa_config->system_aperture.agp_base));
if (pa_config->gart_config.page_table_start_addr != pa_config->gart_config.page_table_end_addr) {
phys_config.page_table_start_addr = pa_config->gart_config.page_table_start_addr >> 12;
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
index ceee5165fd6a..244d4462fa9e 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
@@ -68,10 +68,10 @@ void hubp2_set_vm_system_aperture_settings(struct hubp *hubp,
DCN_VM_SYSTEM_APERTURE_DEFAULT_ADDR_LSB, mc_vm_apt_default.low_part);
REG_SET(DCN_VM_SYSTEM_APERTURE_LOW_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.quad_part);
+ MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.low_part);
REG_SET(DCN_VM_SYSTEM_APERTURE_HIGH_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.quad_part);
+ MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.low_part);
REG_SET_2(DCN_VM_MX_L1_TLB_CNTL, 0,
ENABLE_L1_TLB, 1,
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
index 08ea0a1b9e7f..67828505939a 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
@@ -238,10 +238,10 @@ static void hubp21_set_vm_system_aperture_settings(struct hubp *hubp,
mc_vm_apt_high.quad_part = apt->sys_high.quad_part >> 18;
REG_SET(DCN_VM_SYSTEM_APERTURE_LOW_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.quad_part);
+ MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.low_part);
REG_SET(DCN_VM_SYSTEM_APERTURE_HIGH_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.quad_part);
+ MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.low_part);
REG_SET_2(DCN_VM_MX_L1_TLB_CNTL, 0,
ENABLE_L1_TLB, 1,
diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
index e2708e30eb1b..3e5ae0eae39f 100644
--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
@@ -55,10 +55,10 @@ void hubp3_set_vm_system_aperture_settings(struct hubp *hubp,
mc_vm_apt_high.quad_part = apt->sys_high.quad_part >> 18;
REG_SET(DCN_VM_SYSTEM_APERTURE_LOW_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.quad_part);
+ MC_VM_SYSTEM_APERTURE_LOW_ADDR, mc_vm_apt_low.low_part);
REG_SET(DCN_VM_SYSTEM_APERTURE_HIGH_ADDR, 0,
- MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.quad_part);
+ MC_VM_SYSTEM_APERTURE_HIGH_ADDR, mc_vm_apt_high.low_part);
REG_SET_2(DCN_VM_MX_L1_TLB_CNTL, 0,
ENABLE_L1_TLB, 1,
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
index 5273ca09fe12..f2ac516b685f 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
@@ -858,7 +858,7 @@ void dce110_edp_power_control(
DC_LOG_HW_RESUME_S3(
"%s: remaining_min_edp_poweroff_time_ms=%llu: begin wait.\n",
__func__, remaining_min_edp_poweroff_time_ms);
- msleep(remaining_min_edp_poweroff_time_ms);
+ msleep((unsigned int)remaining_min_edp_poweroff_time_ms);
DC_LOG_HW_RESUME_S3(
"%s: remaining_min_edp_poweroff_time_ms=%llu: end wait.\n",
__func__, remaining_min_edp_poweroff_time_ms);
@@ -883,7 +883,7 @@ void dce110_edp_power_control(
cntl.coherent = false;
cntl.lanes_number = LANE_COUNT_FOUR;
cntl.hpd_sel = link->link_enc->hpd_source;
- pwrseq_instance = link->panel_cntl->pwrseq_inst;
+ pwrseq_instance = (uint8_t)link->panel_cntl->pwrseq_inst;
if (ctx->dc->ctx->dmub_srv &&
ctx->dc->debug.dmub_command_table) {
@@ -952,7 +952,7 @@ void dce110_edp_wait_for_T12(
t12_duration += link->panel_config.pps.extra_t12_ms; // Add extra T12
if (time_since_edp_poweroff_ms < t12_duration)
- msleep(t12_duration - time_since_edp_poweroff_ms);
+ msleep((unsigned int)(t12_duration - time_since_edp_poweroff_ms));
}
}
/*todo: cloned in stream enc, fix*/
@@ -1021,8 +1021,8 @@ void dce110_edp_backlight_control(
*/
/* dc_service_sleep_in_milliseconds(50); */
/*edp 1.2*/
- if (link->panel_cntl)
- pwrseq_instance = link->panel_cntl->pwrseq_inst;
+ if (link->panel_cntl)
+ pwrseq_instance = (uint8_t)link->panel_cntl->pwrseq_inst;
if (cntl.action == TRANSMITTER_CONTROL_BACKLIGHT_ON) {
if (!link->dc->config.edp_no_power_sequencing)
@@ -1439,7 +1440,7 @@ void build_audio_output(
audio_output->crtc_info.pixel_repetition = 1;
audio_output->crtc_info.interlaced =
- stream->timing.flags.INTERLACE;
+ (stream->timing.flags.INTERLACE != 0);
audio_output->crtc_info.refresh_rate =
(stream->timing.pix_clk_100hz*100)/
@@ -1839,7 +1840,7 @@ static void power_down_all_hw_blocks(struct dc *dc)
static void disable_vga_and_power_gate_all_controllers(
struct dc *dc)
{
- int i;
+ uint8_t i;
struct timing_generator *tg;
struct dc_context *ctx = dc->ctx;
@@ -1869,7 +1870,7 @@ static void get_edp_streams(struct dc_state *context,
struct dc_stream_state **edp_streams,
int *edp_stream_num)
{
- int i;
+ uint8_t i;
*edp_stream_num = 0;
for (i = 0; i < context->stream_count; i++) {
@@ -2115,9 +2116,11 @@ static uint32_t compute_pstate_blackout_duration(
const struct dc_stream_state *stream)
{
uint32_t total_dest_line_time_ns;
+ int64_t pstate_blackout_duration_ns64;
uint32_t pstate_blackout_duration_ns;
- pstate_blackout_duration_ns = 1000 * blackout_duration.value >> 24;
+ pstate_blackout_duration_ns64 = (1000 * blackout_duration.value) >> 24;
+ pstate_blackout_duration_ns = (uint32_t)pstate_blackout_duration_ns64;
total_dest_line_time_ns = 1000000UL *
(stream->timing.h_total * 10) /
@@ -2574,7 +2577,7 @@ enum dc_status dce110_apply_ctx_to_hw(
}
hws->funcs.enable_display_power_gating(
- dc, i, dc->ctx->dc_bios,
+ dc, (uint8_t)i, dc->ctx->dc_bios,
PIPE_GATING_CONTROL_DISABLE);
}
@@ -2919,10 +2922,10 @@ static void dce110_init_hw(struct dc *dc)
xfm->funcs->transform_reset(xfm);
hws->funcs.enable_display_power_gating(
- dc, i, bp,
+ dc, (uint8_t)i, bp,
PIPE_GATING_CONTROL_INIT);
hws->funcs.enable_display_power_gating(
- dc, i, bp,
+ dc, (uint8_t)i, bp,
PIPE_GATING_CONTROL_DISABLE);
hws->funcs.enable_display_pipe_clock_gating(
dc->ctx,
@@ -3180,7 +3183,7 @@ static void dce110_power_down_fe(struct dc *dc, struct dc_state *state, struct p
return;
hws->funcs.enable_display_power_gating(
- dc, fe_idx, dc->ctx->dc_bios, PIPE_GATING_CONTROL_ENABLE);
+ dc, (uint8_t)fe_idx, dc->ctx->dc_bios, PIPE_GATING_CONTROL_ENABLE);
dc->res_pool->transforms[fe_idx]->funcs->transform_reset(
dc->res_pool->transforms[fe_idx]);
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce120/dce120_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce120/dce120_hwseq.c
index 0689bbf12ad8..fbe34d1bb39a 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dce120/dce120_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dce120/dce120_hwseq.c
@@ -208,24 +208,24 @@ static void dce120_update_dchub(
FB_BASE, 0x0FFFF);
REG_UPDATE(DCHUB_AGP_BASE,
- AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+ AGP_BASE, (uint32_t)(dh_data->zfb_phys_addr_base >> 22));
REG_UPDATE(DCHUB_AGP_BOT,
- AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+ AGP_BOT, (uint32_t)(dh_data->zfb_mc_base_addr >> 22));
REG_UPDATE(DCHUB_AGP_TOP,
- AGP_TOP, (dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22);
+ AGP_TOP, (uint32_t)((dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22));
break;
case FRAME_BUFFER_MODE_MIXED_ZFB_AND_LOCAL:
/*Should not touch FB LOCATION (done by VBIOS on AsicInit table)*/
REG_UPDATE(DCHUB_AGP_BASE,
- AGP_BASE, dh_data->zfb_phys_addr_base >> 22);
+ AGP_BASE, (uint32_t)(dh_data->zfb_phys_addr_base >> 22));
REG_UPDATE(DCHUB_AGP_BOT,
- AGP_BOT, dh_data->zfb_mc_base_addr >> 22);
+ AGP_BOT, (uint32_t)(dh_data->zfb_mc_base_addr >> 22));
REG_UPDATE(DCHUB_AGP_TOP,
- AGP_TOP, (dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22);
+ AGP_TOP, (uint32_t)((dh_data->zfb_mc_base_addr + dh_data->zfb_size_in_byte - 1) >> 22));
break;
case FRAME_BUFFER_MODE_LOCAL_ONLY:
/*Should not touch FB LOCATION (done by VBIOS on AsicInit table)*/
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
index f1fd372e3826..566edc05b99d 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
@@ -1574,7 +1574,7 @@ void dcn10_disable_plane(struct dc *dc, struct dc_state *state, struct pipe_ctx
void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
{
- int i;
+ uint8_t i;
struct dce_hwseq *hws = dc->hwseq;
struct hubbub *hubbub = dc->res_pool->hubbub;
bool can_apply_seamless_boot = false;
@@ -1677,7 +1677,7 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
pipe_ctx->plane_res.hubp = hubp;
pipe_ctx->plane_res.dpp = dpp;
- pipe_ctx->plane_res.mpcc_inst = dpp->inst;
+ pipe_ctx->plane_res.mpcc_inst = (uint8_t)dpp->inst;
hubp->mpcc_id = dpp->inst;
hubp->opp_id = OPP_ID_INVALID;
hubp->power_gated = false;
@@ -2258,7 +2258,7 @@ void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_cursor = 1;
- inst_flags.opp_inst = pipe->stream_res.opp->inst;
+ inst_flags.opp_inst = (uint8_t)pipe->stream_res.opp->inst;
dmub_hw_lock_mgr_cmd(dc->ctx->dmub_srv,
lock,
@@ -2383,7 +2383,7 @@ static uint8_t get_clock_divider(struct pipe_ctx *pipe,
}
clock_divider *= numpipes;
- return clock_divider;
+ return (uint8_t)clock_divider;
}
static int dcn10_align_pixel_clocks(struct dc *dc, int group_size,
@@ -2458,7 +2458,7 @@ static int dcn10_align_pixel_clocks(struct dc *dc, int group_size,
dc->res_pool->dp_clock_source->funcs->override_dp_pix_clk(
dc->res_pool->dp_clock_source,
grouped_pipes[i]->stream_res.tg->inst,
- phase[i], modulo[i]);
+ (unsigned int)phase[i], (unsigned int)modulo[i]);
dc->res_pool->dp_clock_source->funcs->get_pixel_clk_frequency_100hz(
dc->res_pool->dp_clock_source,
grouped_pipes[i]->stream_res.tg->inst, &pclk);
@@ -3516,7 +3516,7 @@ void dcn10_config_stereo_parameters(
}
}
flags->RIGHT_EYE_POLARITY =\
- stream->timing.flags.RIGHT_EYE_3D_POLARITY;
+ (stream->timing.flags.RIGHT_EYE_3D_POLARITY != 0);
if (timing_3d_format == TIMING_3D_FORMAT_HW_FRAME_PACKING)
flags->FRAME_PACKED = 1;
}
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
index 288e4edaa9a2..c2ea0106fdec 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
@@ -235,7 +235,7 @@ void dcn20_setup_gsl_group_as_lock(
group_idx = find_free_gsl_group(dc);
ASSERT(group_idx != 0);
- pipe_ctx->stream_res.gsl_group = group_idx;
+ pipe_ctx->stream_res.gsl_group = (uint8_t)group_idx;
/* set gsl group reg field and mark resource used */
switch (group_idx) {
@@ -826,7 +826,7 @@ enum dc_status dcn20_enable_stream_timing(
unsigned int event_triggers = 0;
int opp_cnt = 1;
int opp_inst[MAX_PIPES] = {0};
- bool interlace = stream->timing.flags.INTERLACE;
+ bool interlace = (stream->timing.flags.INTERLACE != 0);
int i;
struct mpc_dwb_flow_control flow_control;
struct mpc *mpc = dc->res_pool->mpc;
@@ -1452,7 +1452,7 @@ void dcn20_pipe_control_lock(
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_pipe = 1;
- inst_flags.otg_inst = pipe->stream_res.tg->inst;
+ inst_flags.otg_inst = (uint8_t)pipe->stream_res.tg->inst;
if (pipe->plane_state != NULL)
hw_locks.bits.triple_buffer_lock = pipe->plane_state->triplebuffer_flips;
@@ -2733,7 +2733,8 @@ void dcn20_update_plane_addr(const struct dc *dc, struct pipe_ctx *pipe_ctx)
addr_patched = patch_address_for_sbs_tb_stereo(pipe_ctx, &addr);
// Call Helper to track VMID use
- vm_helper_mark_vmid_used(dc->vm_helper, plane_state->address.vmid, pipe_ctx->plane_res.hubp->inst);
+ vm_helper_mark_vmid_used(dc->vm_helper, plane_state->address.vmid,
+ (uint8_t)pipe_ctx->plane_res.hubp->inst);
pipe_ctx->plane_res.hubp->funcs->hubp_program_surface_flip_and_addr(
pipe_ctx->plane_res.hubp,
@@ -3126,7 +3127,7 @@ void dcn20_program_dmdata_engine(struct pipe_ctx *pipe_ctx)
void dcn20_fpga_init_hw(struct dc *dc)
{
- int i, j;
+ uint8_t i, j;
struct dce_hwseq *hws = dc->hwseq;
struct resource_pool *res_pool = dc->res_pool;
struct dc_state *context = dc->current_state;
@@ -3199,7 +3200,7 @@ void dcn20_fpga_init_hw(struct dc *dc)
pipe_ctx->plane_res.hubp = hubp;
pipe_ctx->plane_res.dpp = dpp;
- pipe_ctx->plane_res.mpcc_inst = dpp->inst;
+ pipe_ctx->plane_res.mpcc_inst = (uint8_t)dpp->inst;
hubp->mpcc_id = dpp->inst;
hubp->opp_id = OPP_ID_INVALID;
hubp->power_gated = false;
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
index 062745389d9a..0988369bd968 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn21/dcn21_hwseq.c
@@ -146,10 +146,10 @@ bool dcn21_dmub_abm_set_pipe(struct abm *abm, uint32_t otg_inst,
memset(&cmd, 0, sizeof(cmd));
cmd.abm_set_pipe.header.type = DMUB_CMD__ABM;
cmd.abm_set_pipe.header.sub_type = DMUB_CMD__ABM_SET_PIPE;
- cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = otg_inst;
- cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = pwrseq_inst;
- cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = option;
- cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = panel_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.otg_inst = (uint8_t)otg_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.pwrseq_inst = (uint8_t)pwrseq_inst;
+ cmd.abm_set_pipe.abm_set_pipe_data.set_pipe_option = (uint8_t)option;
+ cmd.abm_set_pipe.abm_set_pipe_data.panel_inst = (uint8_t)panel_inst;
cmd.abm_set_pipe.abm_set_pipe_data.ramping_boundary = ramping_boundary;
cmd.abm_set_pipe.header.payload_bytes = sizeof(struct dmub_cmd_abm_set_pipe_data);
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
index 2aa0f1de8103..2705c58a9150 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn30/dcn30_hwseq.c
@@ -978,7 +978,7 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
cursor_cache_enable ? &cursor_attr : NULL)) {
unsigned int v_total = stream->adjust.v_total_max ?
stream->adjust.v_total_max : stream->timing.v_total;
- unsigned int refresh_hz = div_u64((unsigned long long) stream->timing.pix_clk_100hz *
+ unsigned int refresh_hz = (unsigned int)div_u64((unsigned long long)stream->timing.pix_clk_100hz *
100LL, (v_total * stream->timing.h_total));
/*
@@ -1006,9 +1006,9 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
unsigned int denom = refresh_hz * 6528;
unsigned int stutter_period = dc->current_state->perf_params.stutter_period_us;
- tmr_delay = div_u64(((1000000LL + 2 * stutter_period * refresh_hz) *
+ tmr_delay = (uint32_t)(div_u64(((1000000LL + 2 * stutter_period * refresh_hz) *
(100LL + dc->debug.mall_additional_timer_percent) + denom - 1),
- denom) - 64LL;
+ denom) - 64LL);
/* In some cases the stutter period is really big (tiny modes) in these
* cases MALL cant be enabled, So skip these cases to avoid a ASSERT()
@@ -1030,9 +1030,9 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
}
denom *= 2;
- tmr_delay = div_u64(((1000000LL + 2 * stutter_period * refresh_hz) *
+ tmr_delay = (uint32_t)(div_u64(((1000000LL + 2 * stutter_period * refresh_hz) *
(100LL + dc->debug.mall_additional_timer_percent) + denom - 1),
- denom) - 64LL;
+ denom) - 64LL);
}
/* Copy HW cursor */
@@ -1062,9 +1062,9 @@ bool dcn30_apply_idle_power_optimizations(struct dc *dc, bool enable)
cmd.mall.cursor_copy_src.quad_part = cursor_attr.address.quad_part;
cmd.mall.cursor_copy_dst.quad_part =
(plane->address.grph.cursor_cache_addr.quad_part + 2047) & ~2047;
- cmd.mall.cursor_width = cursor_attr.width;
- cmd.mall.cursor_height = cursor_attr.height;
- cmd.mall.cursor_pitch = cursor_attr.pitch;
+ cmd.mall.cursor_width = (uint16_t)cursor_attr.width;
+ cmd.mall.cursor_height = (uint16_t)cursor_attr.height;
+ cmd.mall.cursor_pitch = (uint16_t)cursor_attr.pitch;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
index 858a06b03b57..1e856ee508f1 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
@@ -211,7 +211,7 @@ void dcn314_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx
if (pipe_ctx->stream_res.dsc) {
struct pipe_ctx *current_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx];
- update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
+ update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC != 0);
/* Check if no longer using pipe for ODM, then need to disconnect DSC for that pipe */
if (!pipe_ctx->next_odm_pipe && current_pipe_ctx->next_odm_pipe &&
@@ -419,7 +419,7 @@ void dcn314_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc
if (dcn314_is_pipe_dig_fifo_on(pipe))
continue;
pipe->stream_res.tg->funcs->disable_crtc(pipe->stream_res.tg);
- reset_sync_context_for_pipe(dc, context, i);
+ reset_sync_context_for_pipe(dc, context, (uint8_t)i);
otg_disabled[i] = true;
}
}
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
index b45ceb570a5c..7dbaaf9403f2 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c
@@ -1175,7 +1175,7 @@ void dcn32_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
if (pipe_ctx->stream_res.dsc) {
struct pipe_ctx *current_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx];
- dcn32_update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
+ dcn32_update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC != 0);
/* Check if no longer using pipe for ODM, then need to disconnect DSC for that pipe */
if (!pipe_ctx->next_odm_pipe && current_pipe_ctx->next_odm_pipe &&
@@ -1277,7 +1277,7 @@ void dcn32_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc_
if ((pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))
&& dc_state_get_pipe_subvp_type(dc_state, pipe) != SUBVP_PHANTOM) {
pipe->stream_res.tg->funcs->disable_crtc(pipe->stream_res.tg);
- reset_sync_context_for_pipe(dc, context, i);
+ reset_sync_context_for_pipe(dc, context, (uint8_t)i);
otg_disabled[i] = true;
}
}
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
index f133b52ea958..894d48fcd7f8 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
@@ -467,7 +467,7 @@ void dcn35_update_odm(struct dc *dc, struct dc_state *context, struct pipe_ctx *
if (pipe_ctx->stream_res.dsc) {
struct pipe_ctx *current_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[pipe_ctx->pipe_idx];
- update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC);
+ update_dsc_on_stream(pipe_ctx, pipe_ctx->stream->timing.flags.DSC != 0);
/* Check if no longer using pipe for ODM, then need to disconnect DSC for that pipe */
if (!pipe_ctx->next_odm_pipe && current_pipe_ctx->next_odm_pipe &&
@@ -621,7 +621,7 @@ void dcn35_z10_restore(const struct dc *dc)
void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
{
- int i;
+ uint8_t i;
struct dce_hwseq *hws = dc->hwseq;
struct hubbub *hubbub = dc->res_pool->hubbub;
struct pg_cntl *pg_cntl = dc->res_pool->pg_cntl;
@@ -725,7 +725,7 @@ void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
pipe_ctx->plane_res.hubp = hubp;
pipe_ctx->plane_res.dpp = dpp;
- pipe_ctx->plane_res.mpcc_inst = dpp->inst;
+ pipe_ctx->plane_res.mpcc_inst = (uint8_t)dpp->inst;
hubp->mpcc_id = dpp->inst;
hubp->opp_id = OPP_ID_INVALID;
hubp->power_gated = false;
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
index 7e6bdefb5471..9c505a8a773c 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
@@ -1158,7 +1158,7 @@ static bool dcn401_check_no_memory_request_for_cab(struct dc *dc)
static uint32_t dcn401_calculate_cab_allocation(struct dc *dc, struct dc_state *ctx)
{
int i;
- uint8_t num_ways = 0;
+ uint32_t num_ways = 0;
uint32_t mall_ss_size_bytes = 0;
mall_ss_size_bytes = ctx->bw_ctx.bw.dcn.mall_ss_size_bytes;
@@ -1189,7 +1189,8 @@ static uint32_t dcn401_calculate_cab_allocation(struct dc *dc, struct dc_state *
bool dcn401_apply_idle_power_optimizations(struct dc *dc, bool enable)
{
union dmub_rb_cmd cmd;
- uint8_t ways, i;
+ uint32_t ways;
+ uint8_t i;
int j;
bool mall_ss_unsupported = false;
struct dc_plane_state *plane = NULL;
@@ -1242,7 +1243,7 @@ bool dcn401_apply_idle_power_optimizations(struct dc *dc, bool enable)
}
if (ways <= dc->caps.cache_num_ways && !mall_ss_unsupported) {
cmd.cab.header.sub_type = DMUB_CMD__CAB_DCN_SS_FIT_IN_CAB;
- cmd.cab.cab_alloc_ways = ways;
+ cmd.cab.cab_alloc_ways = (uint8_t)ways;
DC_LOG_MALL("cab allocation: %d ways. CAB action: DCN_SS_FIT_IN_CAB\n", ways);
} else {
cmd.cab.header.sub_type = DMUB_CMD__CAB_DCN_SS_NOT_FIT_IN_CAB;
@@ -1433,12 +1434,15 @@ void dcn401_dmub_hw_control_lock_fast(union block_sequence_params *params)
void dcn401_fams2_update_config(struct dc *dc, struct dc_state *context, bool enable)
{
bool fams2_info_required;
+ bool fams2_enabled;
+ bool fams2_legacy_no_fams2;
if (!dc->ctx || !dc->ctx->dmub_srv || !dc->debug.fams2_config.bits.enable)
return;
- fams2_info_required = context->bw_ctx.bw.dcn.fams2_global_config.features.bits.enable;
- fams2_info_required |= context->bw_ctx.bw.dcn.fams2_global_config.features.bits.legacy_method_no_fams2;
+ fams2_enabled = context->bw_ctx.bw.dcn.fams2_global_config.features.bits.enable != 0u;
+ fams2_legacy_no_fams2 = context->bw_ctx.bw.dcn.fams2_global_config.features.bits.legacy_method_no_fams2 != 0u;
+ fams2_info_required = fams2_enabled || fams2_legacy_no_fams2;
dc_dmub_srv_fams2_update_config(dc, context, enable && fams2_info_required);
}
@@ -1470,7 +1474,7 @@ static void update_dsc_for_odm_change(struct dc *dc, struct dc_state *context,
if (otg_master->stream_res.dsc)
dcn32_update_dsc_on_stream(otg_master,
- otg_master->stream->timing.flags.DSC);
+ otg_master->stream->timing.flags.DSC != 0u);
if (old_otg_master && old_otg_master->stream_res.dsc) {
for (i = 0; i < old_opp_head_count; i++) {
old_pipe = old_opp_heads[i];
@@ -3297,7 +3301,7 @@ void dcn401_setup_gsl_group_as_lock_sequence(
group_idx = find_free_gsl_group(dc);
ASSERT(group_idx != 0);
- pipe_ctx->stream_res.gsl_group = group_idx;
+ pipe_ctx->stream_res.gsl_group = (uint8_t)group_idx;
/* set gsl group reg field and mark resource used */
switch (group_idx) {
diff --git a/drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h b/drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h
index d1656c9d50df..d567d4bd585d 100644
--- a/drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h
+++ b/drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h
@@ -79,7 +79,7 @@ static inline struct bw_fixed bw_int_to_fixed(int64_t value)
static inline int32_t bw_fixed_to_int(struct bw_fixed value)
{
- return BW_FIXED_GET_INTEGER_PART(value.value);
+ return (int32_t)BW_FIXED_GET_INTEGER_PART(value.value);
}
struct bw_fixed bw_frc_to_fixed(int64_t num, int64_t denum);
diff --git a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
index 060460abc377..ae6ed3a52d53 100644
--- a/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
+++ b/drivers/gpu/drm/amd/display/dc/link/accessories/link_dp_cts.c
@@ -199,7 +199,6 @@ static void dp_test_get_audio_test_data(struct dc_link *link, bool disable_video
unsigned int channel_count;
unsigned int channel = 0;
unsigned int modes = 0;
- unsigned int sampling_rate_in_hz = 0;
// get audio test mode and test pattern parameters
core_link_read_dpcd(
@@ -232,38 +231,10 @@ static void dp_test_get_audio_test_data(struct dc_link *link, bool disable_video
}
}
- // translate sampling rate
- switch (dpcd_test_mode.bits.sampling_rate) {
- case AUDIO_SAMPLING_RATE_32KHZ:
- sampling_rate_in_hz = 32000;
- break;
- case AUDIO_SAMPLING_RATE_44_1KHZ:
- sampling_rate_in_hz = 44100;
- break;
- case AUDIO_SAMPLING_RATE_48KHZ:
- sampling_rate_in_hz = 48000;
- break;
- case AUDIO_SAMPLING_RATE_88_2KHZ:
- sampling_rate_in_hz = 88200;
- break;
- case AUDIO_SAMPLING_RATE_96KHZ:
- sampling_rate_in_hz = 96000;
- break;
- case AUDIO_SAMPLING_RATE_176_4KHZ:
- sampling_rate_in_hz = 176400;
- break;
- case AUDIO_SAMPLING_RATE_192KHZ:
- sampling_rate_in_hz = 192000;
- break;
- default:
- sampling_rate_in_hz = 0;
- break;
- }
-
link->audio_test_data.flags.test_requested = 1;
link->audio_test_data.flags.disable_video = disable_video;
- link->audio_test_data.sampling_rate = sampling_rate_in_hz;
- link->audio_test_data.channel_count = channel_count;
+ link->audio_test_data.sampling_rate = (uint8_t)dpcd_test_mode.bits.sampling_rate;
+ link->audio_test_data.channel_count = (uint8_t)channel_count;
link->audio_test_data.pattern_type = test_pattern;
if (test_pattern == DP_TEST_PATTERN_AUDIO_SAWTOOTH) {
@@ -885,7 +856,7 @@ bool dp_set_test_pattern(
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_dig = 1;
- inst_flags.dig_inst = pipe_ctx->stream_res.tg->inst;
+ inst_flags.dig_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
dmub_hw_lock_mgr_cmd(link->ctx->dmub_srv,
true,
@@ -933,7 +904,7 @@ bool dp_set_test_pattern(
struct dmub_hw_lock_inst_flags inst_flags = { 0 };
hw_locks.bits.lock_dig = 1;
- inst_flags.dig_inst = pipe_ctx->stream_res.tg->inst;
+ inst_flags.dig_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
dmub_hw_lock_mgr_cmd(link->ctx->dmub_srv,
false,
diff --git a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
index dbbedeeed298..04df75114dd5 100644
--- a/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
+++ b/drivers/gpu/drm/amd/display/dc/link/hwss/link_hwss_hpo_dp.c
@@ -63,7 +63,7 @@ void set_hpo_dp_hblank_min_symbol_width(struct pipe_ctx *pipe_ctx,
time_slot_in_ms = dc_fixpt_from_fraction(32 * 4, link_bw_in_kbps);
mtp_cnt_per_h_blank = dc_fixpt_div(h_blank_in_ms,
dc_fixpt_mul_int(time_slot_in_ms, 64));
- hblank_min_symbol_width = dc_fixpt_floor(
+ hblank_min_symbol_width = (uint16_t)dc_fixpt_floor(
dc_fixpt_mul(mtp_cnt_per_h_blank, throttled_vcp_size));
}
@@ -98,7 +98,7 @@ void setup_hpo_dp_stream_attribute(struct pipe_ctx *pipe_ctx)
&stream->timing,
stream->output_color_space,
stream->use_vsc_sdp_for_colorimetry,
- stream->timing.flags.DSC,
+ (stream->timing.flags.DSC != 0),
false);
link->dc->link_srv->dp_trace_source_sequence(link,
DPCD_SOURCE_SEQ_AFTER_DP_STREAM_ATTR);
diff --git a/drivers/gpu/drm/amd/display/dc/link/link_detection.c b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
index 794dd6a95918..7924fe4ab3a5 100644
--- a/drivers/gpu/drm/amd/display/dc/link/link_detection.c
+++ b/drivers/gpu/drm/amd/display/dc/link/link_detection.c
@@ -290,12 +290,12 @@ static bool i2c_read(
struct i2c_payload payloads[2] = {
{
.write = true,
- .address = address,
+ .address = (uint8_t)address,
.length = 1,
.data = &offs_data },
{
.write = false,
- .address = address,
+ .address = (uint8_t)address,
.length = len,
.data = buffer } };
diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
index e12c25896364..f7cc419cfbff 100644
--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
@@ -557,7 +557,7 @@ static void update_psp_stream_config(struct pipe_ctx *pipe_ctx, bool dpms_off)
/* link encoder index */
config.link_enc_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A;
if (dp_is_128b_132b_signal(pipe_ctx))
- config.link_enc_idx = pipe_ctx->link_res.hpo_dp_link_enc->inst;
+ config.link_enc_idx = (uint8_t)pipe_ctx->link_res.hpo_dp_link_enc->inst;
/* dio output index is dpia index for DPIA endpoint & dcio index by default */
if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
@@ -1411,7 +1411,7 @@ static bool write_128b_132b_sst_payload_allocation_table(
if (allocate) {
avg_time_slots_per_mtp = link_calculate_sst_avg_time_slots_per_mtp(stream, link);
- req_slot_count = dc_fixpt_ceil(avg_time_slots_per_mtp);
+ req_slot_count = (uint8_t)dc_fixpt_ceil(avg_time_slots_per_mtp);
/// Validation should filter out modes that exceed link BW
ASSERT(req_slot_count <= MAX_MTP_SLOT_COUNT);
if (req_slot_count > MAX_MTP_SLOT_COUNT)
@@ -1811,7 +1811,7 @@ static void enable_link_hdmi(struct pipe_ctx *pipe_ctx)
write_scdc_data(
stream->link->ddc,
stream->phy_pix_clk,
- stream->timing.flags.LTE_340MCSC_SCRAMBLE);
+ (stream->timing.flags.LTE_340MCSC_SCRAMBLE != 0));
memset(&stream->link->cur_link_settings, 0,
sizeof(struct dc_link_settings));
diff --git a/drivers/gpu/drm/amd/display/dc/link/link_factory.c b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
index 9912615d742f..765b731a12a4 100644
--- a/drivers/gpu/drm/amd/display/dc/link/link_factory.c
+++ b/drivers/gpu/drm/amd/display/dc/link/link_factory.c
@@ -516,7 +516,7 @@ static bool construct_phy(struct dc_link *link,
sizeof(struct dc_link_settings));
link->link_id =
- bios->funcs->get_connector_id(bios, init_params->connector_index);
+ bios->funcs->get_connector_id(bios, (uint8_t)init_params->connector_index);
link->ep_type = DISPLAY_ENDPOINT_PHY;
@@ -544,7 +544,7 @@ static bool construct_phy(struct dc_link *link,
if (bios->funcs->get_disp_connector_caps_info) {
bios->funcs->get_disp_connector_caps_info(bios, link->link_id, &disp_connect_caps_info);
- link->is_internal_display = disp_connect_caps_info.INTERNAL_DISPLAY;
+ link->is_internal_display = (disp_connect_caps_info.INTERNAL_DISPLAY != 0);
DC_LOG_DC("BIOS object table - is_internal_display: %d", link->is_internal_display);
}
@@ -895,7 +895,7 @@ static bool construct_dpia(struct dc_link *link,
}
/* Set dpia port index : 0 to number of dpia ports */
- link->ddc_hw_inst = init_params->connector_index;
+ link->ddc_hw_inst = (uint8_t)init_params->connector_index;
// Assign Dpia preferred eng_id
if (link->dc->res_pool->funcs->get_preferred_eng_id_dpia)
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_ddc.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_ddc.c
index 5d2bcce2f669..ef9306686b14 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_ddc.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_ddc.c
@@ -96,7 +96,7 @@ static void i2c_payloads_add(
for (pos = 0; pos < len; pos += payload_size) {
struct i2c_payload payload = {
.write = write,
- .address = address,
+ .address = (uint8_t)address,
.length = DDC_MIN(payload_size, len - pos),
.data = data + pos };
dal_vector_append(&payloads->payloads, &payload);
@@ -384,8 +384,7 @@ bool link_query_ddc_data(
i2c_payloads_add(
&payloads, address, read_size, read_buf, false);
- command.number_of_payloads =
- i2c_payloads_get_count(&payloads);
+ command.number_of_payloads = (uint8_t)i2c_payloads_get_count(&payloads);
success = dm_helpers_submit_i2c(
ddc->ctx,
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
index 782a45caa13d..01b3d56cdc89 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
@@ -1713,7 +1713,7 @@ enum dc_status dp_retrieve_lttpr_cap(struct dc_link *link)
CONN_DATA_DETECT(link, lttpr_dpcd_data, sizeof(lttpr_dpcd_data), "LTTPR Caps: ");
// Identify closest LTTPR to determine if workarounds required for known embedded LTTPR
- closest_lttpr_offset = dp_get_closest_lttpr_offset(lttpr_count);
+ closest_lttpr_offset = dp_get_closest_lttpr_offset((uint8_t)lttpr_count);
core_link_read_dpcd(link, (DP_LTTPR_IEEE_OUI + closest_lttpr_offset),
link->dpcd_caps.lttpr_caps.lttpr_ieee_oui, sizeof(link->dpcd_caps.lttpr_caps.lttpr_ieee_oui));
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
index c958d3f600c8..6406fe890850 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
@@ -112,7 +112,7 @@ static int get_estimated_bw(struct dc_link *link)
return bw_estimated_bw * (Kbps_TO_Gbps / link->dpia_bw_alloc_config.bw_granularity);
}
-static int get_non_reduced_max_link_rate(struct dc_link *link)
+static uint8_t get_non_reduced_max_link_rate(struct dc_link *link)
{
uint8_t nrd_max_link_rate = 0;
@@ -125,7 +125,7 @@ static int get_non_reduced_max_link_rate(struct dc_link *link)
return nrd_max_link_rate;
}
-static int get_non_reduced_max_lane_count(struct dc_link *link)
+static uint8_t get_non_reduced_max_lane_count(struct dc_link *link)
{
uint8_t nrd_max_lane_count = 0;
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
index 96afce4ffbfa..e1991776c59d 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
@@ -60,11 +60,10 @@ static void dp_pr_set_static_screen_param(struct dc_link *link)
if (dc->current_state->res_ctx.pipe_ctx[i].stream &&
dc->current_state->res_ctx.pipe_ctx[i].stream->link == link) {
struct dc_stream_state *stream = dc->current_state->res_ctx.pipe_ctx[i].stream;
- unsigned int vsync_rate_hz = div64_u64(div64_u64(
- (stream->timing.pix_clk_100hz * (u64)100),
- stream->timing.v_total),
- stream->timing.h_total);
-
+ unsigned int vsync_rate_hz = (unsigned int)div64_u64(div64_u64(
+ (stream->timing.pix_clk_100hz * (u64)100),
+ stream->timing.v_total),
+ stream->timing.h_total);
params.triggers.cursor_update = true;
params.triggers.overlay_update = true;
params.triggers.surface_update = true;
@@ -264,7 +263,7 @@ bool dp_pr_enable(struct dc_link *link, bool enable)
cmd.pr_enable.header.type = DMUB_CMD__PR;
cmd.pr_enable.header.sub_type = DMUB_CMD__PR_ENABLE;
cmd.pr_enable.header.payload_bytes = sizeof(struct dmub_cmd_pr_enable_data);
- cmd.pr_enable.data.panel_inst = panel_inst;
+ cmd.pr_enable.data.panel_inst = (uint8_t)panel_inst;
cmd.pr_enable.data.enable = enable ? 1 : 0;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
@@ -301,17 +300,17 @@ bool dp_pr_copy_settings(struct dc_link *link, struct replay_context *replay_con
cmd.pr_copy_settings.header.type = DMUB_CMD__PR;
cmd.pr_copy_settings.header.sub_type = DMUB_CMD__PR_COPY_SETTINGS;
cmd.pr_copy_settings.header.payload_bytes = sizeof(struct dmub_cmd_pr_copy_settings_data);
- cmd.pr_copy_settings.data.panel_inst = panel_inst;
+ cmd.pr_copy_settings.data.panel_inst = (uint8_t)panel_inst;
// HW inst
cmd.pr_copy_settings.data.aux_inst = replay_context->aux_inst;
cmd.pr_copy_settings.data.digbe_inst = replay_context->digbe_inst;
cmd.pr_copy_settings.data.digfe_inst = replay_context->digfe_inst;
if (pipe_ctx->plane_res.dpp)
- cmd.pr_copy_settings.data.dpp_inst = pipe_ctx->plane_res.dpp->inst;
+ cmd.pr_copy_settings.data.dpp_inst = (uint8_t)pipe_ctx->plane_res.dpp->inst;
else
cmd.pr_copy_settings.data.dpp_inst = 0;
if (pipe_ctx->stream_res.tg)
- cmd.pr_copy_settings.data.otg_inst = pipe_ctx->stream_res.tg->inst;
+ cmd.pr_copy_settings.data.otg_inst = (uint8_t)pipe_ctx->stream_res.tg->inst;
else
cmd.pr_copy_settings.data.otg_inst = 0;
@@ -358,7 +357,7 @@ bool dp_pr_update_state(struct dc_link *link, struct dmub_cmd_pr_update_state_da
cmd.pr_update_state.header.type = DMUB_CMD__PR;
cmd.pr_update_state.header.sub_type = DMUB_CMD__PR_UPDATE_STATE;
cmd.pr_update_state.header.payload_bytes = sizeof(struct dmub_cmd_pr_update_state_data);
- cmd.pr_update_state.data.panel_inst = panel_inst;
+ cmd.pr_update_state.data.panel_inst = (uint8_t)panel_inst;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
return true;
@@ -379,7 +378,7 @@ bool dp_pr_set_general_cmd(struct dc_link *link, struct dmub_cmd_pr_general_cmd_
cmd.pr_general_cmd.header.type = DMUB_CMD__PR;
cmd.pr_general_cmd.header.sub_type = DMUB_CMD__PR_GENERAL_CMD;
cmd.pr_general_cmd.header.payload_bytes = sizeof(struct dmub_cmd_pr_general_cmd_data);
- cmd.pr_general_cmd.data.panel_inst = panel_inst;
+ cmd.pr_general_cmd.data.panel_inst = (uint8_t)panel_inst;
dc_wake_and_execute_dmub_cmd(dc->ctx, &cmd, DM_DMUB_WAIT_TYPE_WAIT);
return true;
@@ -397,7 +396,7 @@ bool dp_pr_get_state(const struct dc_link *link, uint64_t *state)
do {
// Send gpint command and wait for ack
- if (!dc_wake_and_execute_gpint(dc->ctx, DMUB_GPINT__GET_REPLAY_STATE, panel_inst,
+ if (!dc_wake_and_execute_gpint(dc->ctx, DMUB_GPINT__GET_REPLAY_STATE, (uint16_t)panel_inst,
&replay_state, DM_DMUB_WAIT_TYPE_WAIT_WITH_REPLY)) {
// Return invalid state when GPINT times out
replay_state = PR_STATE_INVALID;
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
index 66d0fb1b9b9d..4331e032416f 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_training_8b_10b.c
@@ -132,7 +132,7 @@ void decide_8b_10b_training_settings(
*/
lt_settings->link_settings.link_spread = link->dp_ss_off ?
LINK_SPREAD_DISABLED : LINK_SPREAD_05_DOWNSPREAD_30KHZ;
- lt_settings->eq_pattern_time = get_eq_training_aux_rd_interval(link, link_setting);
+ lt_settings->eq_pattern_time = (uint16_t)get_eq_training_aux_rd_interval(link, link_setting);
lt_settings->pattern_for_cr = decide_cr_training_pattern(link_setting);
lt_settings->pattern_for_eq = decide_eq_training_pattern(link, link_res, link_setting);
lt_settings->enhanced_framing = 1;
@@ -140,7 +140,7 @@ void decide_8b_10b_training_settings(
lt_settings->disallow_per_lane_settings = true;
lt_settings->always_match_dpcd_with_hw_lane_settings = true;
lt_settings->lttpr_mode = dp_decide_8b_10b_lttpr_mode(link);
- lt_settings->cr_pattern_time = get_cr_training_aux_rd_interval(link, link_setting, lt_settings->lttpr_mode);
+ lt_settings->cr_pattern_time = (uint16_t)get_cr_training_aux_rd_interval(link, link_setting, lt_settings->lttpr_mode);
dp_hw_to_dpcd_lane_settings(lt_settings, lt_settings->hw_lane_settings, lt_settings->dpcd_lane_settings);
/* Some embedded LTTPRs rely on receiving TPS2 before LT to interop reliably with sensitive VGA dongles
@@ -195,7 +195,7 @@ static void set_link_settings_and_perform_early_tps2_retimer_pre_lt_sequence(str
* 6. Begin link training as usual
* */
- uint32_t closest_lttpr_address_offset = dp_get_closest_lttpr_offset(lttpr_count);
+ uint32_t closest_lttpr_address_offset = dp_get_closest_lttpr_offset((uint8_t)lttpr_count);
union dpcd_training_pattern dpcd_pattern = {0};
@@ -379,7 +379,7 @@ enum link_training_result perform_8b_10b_channel_equalization_sequence(
dpcd_set_lane_settings(link, lt_settings, offset);
/* 3. wait for receiver to lock-on*/
- wait_time_microsec = dp_get_eq_aux_rd_interval(link, lt_settings, offset, retries_ch_eq);
+ wait_time_microsec = dp_get_eq_aux_rd_interval(link, lt_settings, offset, (uint8_t)retries_ch_eq);
dp_wait_for_training_aux_rd_interval(
link,
@@ -408,7 +408,7 @@ enum link_training_result perform_8b_10b_channel_equalization_sequence(
/* 6. check CHEQ done*/
if (dp_is_ch_eq_done(lane_count, dpcd_lane_status) &&
dp_is_symbol_locked(lane_count, dpcd_lane_status) &&
- dp_check_interlane_aligned(dpcd_lane_status_updated, link, retries_ch_eq))
+ dp_check_interlane_aligned(dpcd_lane_status_updated, link, (uint8_t)retries_ch_eq))
return LINK_TRAINING_SUCCESS;
/* 7. update VS/PE/PC2 in lt_settings*/
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
index 4ae739dd9c7e..6aa65815af22 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_edp_panel_control.c
@@ -601,12 +601,12 @@ bool edp_set_psr_allow_active(struct dc_link *link, const bool *allow_active,
link->psr_settings.psr_power_opt = *power_opts;
if (psr != NULL && link->psr_settings.psr_feature_enabled && psr->funcs->psr_set_power_opt)
- psr->funcs->psr_set_power_opt(psr, link->psr_settings.psr_power_opt, panel_inst);
+ psr->funcs->psr_set_power_opt(psr, link->psr_settings.psr_power_opt, (uint8_t)panel_inst);
}
if (psr != NULL && link->psr_settings.psr_feature_enabled &&
force_static && psr->funcs->psr_force_static)
- psr->funcs->psr_force_static(psr, panel_inst);
+ psr->funcs->psr_force_static(psr, (uint8_t)panel_inst);
/* Enable or Disable PSR */
if (allow_active && link->psr_settings.psr_allow_active != *allow_active) {
@@ -615,9 +615,9 @@ bool edp_set_psr_allow_active(struct dc_link *link, const bool *allow_active,
if (!link->psr_settings.psr_allow_active)
dc_z10_restore(dc);
- if (psr != NULL && link->psr_settings.psr_feature_enabled) {
- psr->funcs->psr_enable(psr, link->psr_settings.psr_allow_active, wait, panel_inst);
- } else if ((dmcu != NULL && dmcu->funcs->is_dmcu_initialized(dmcu)) &&
+ if (psr != NULL && link->psr_settings.psr_feature_enabled)
+ psr->funcs->psr_enable(psr, link->psr_settings.psr_allow_active, wait, (uint8_t)panel_inst);
+ else if ((dmcu != NULL && dmcu->funcs->is_dmcu_initialized(dmcu)) &&
link->psr_settings.psr_feature_enabled)
dmcu->funcs->set_psr_enable(dmcu, link->psr_settings.psr_allow_active, wait);
else
@@ -637,7 +637,7 @@ bool edp_get_psr_state(const struct dc_link *link, enum dc_psr_state *state)
return false;
if (psr != NULL && link->psr_settings.psr_feature_enabled)
- psr->funcs->psr_get_state(psr, state, panel_inst);
+ psr->funcs->psr_get_state(psr, state, (uint8_t)panel_inst);
else if (dmcu != NULL && link->psr_settings.psr_feature_enabled)
dmcu->funcs->get_psr_state(dmcu, state);
@@ -811,7 +811,7 @@ bool edp_setup_psr(struct dc_link *link,
psr_context->smuPhyId = transmitter_to_phy_id(link);
psr_context->crtcTimingVerticalTotal = stream->timing.v_total;
- psr_context->vsync_rate_hz = div64_u64(div64_u64((stream->
+ psr_context->vsync_rate_hz = (unsigned int)div64_u64(div64_u64((stream->
timing.pix_clk_100hz * (u64)100),
stream->timing.v_total),
stream->timing.h_total);
@@ -885,7 +885,7 @@ bool edp_setup_psr(struct dc_link *link,
if (psr) {
link->psr_settings.psr_feature_enabled = psr->funcs->psr_copy_settings(psr,
- link, psr_context, panel_inst);
+ link, psr_context, (uint8_t)panel_inst);
link->psr_settings.psr_power_opt = 0;
link->psr_settings.psr_allow_active = 0;
} else {
@@ -913,7 +913,7 @@ void edp_get_psr_residency(const struct dc_link *link, uint32_t *residency, enum
// PSR residency measurements only supported on DMCUB
if (psr != NULL && link->psr_settings.psr_feature_enabled)
- psr->funcs->psr_get_residency(psr, residency, panel_inst, mode);
+ psr->funcs->psr_get_residency(psr, residency, (uint8_t)panel_inst, mode);
else
*residency = 0;
}
@@ -947,7 +947,7 @@ bool edp_set_replay_allow_active(struct dc_link *link, const bool *allow_active,
if (power_opts && link->replay_settings.replay_power_opt_active != *power_opts) {
if (replay != NULL && link->replay_settings.replay_feature_enabled &&
replay->funcs->replay_set_power_opt) {
- replay->funcs->replay_set_power_opt(replay, *power_opts, panel_inst);
+ replay->funcs->replay_set_power_opt(replay, *power_opts, (uint8_t)panel_inst);
link->replay_settings.replay_power_opt_active = *power_opts;
}
}
@@ -957,7 +957,7 @@ bool edp_set_replay_allow_active(struct dc_link *link, const bool *allow_active,
// TODO: Handle mux change case if force_static is set
// If force_static is set, just change the replay_allow_active state directly
if (replay != NULL && link->replay_settings.replay_feature_enabled)
- replay->funcs->replay_enable(replay, *allow_active, wait, panel_inst);
+ replay->funcs->replay_enable(replay, *allow_active, wait, (uint8_t)panel_inst);
link->replay_settings.replay_allow_active = *allow_active;
}
@@ -975,7 +975,7 @@ bool edp_get_replay_state(const struct dc_link *link, uint64_t *state)
return false;
if (replay != NULL && link->replay_settings.replay_feature_enabled)
- replay->funcs->replay_get_state(replay, &pr_state, panel_inst);
+ replay->funcs->replay_get_state(replay, &pr_state, (uint8_t)panel_inst);
*state = pr_state;
return true;
@@ -1046,7 +1046,7 @@ bool edp_setup_freesync_replay(struct dc_link *link, const struct dc_stream_stat
replay_context.os_request_force_ffu = link->replay_settings.config.os_request_force_ffu;
link->replay_settings.replay_feature_enabled =
- replay->funcs->replay_copy_settings(replay, link, &replay_context, panel_inst);
+ replay->funcs->replay_copy_settings(replay, link, &replay_context, (uint8_t)panel_inst);
if (link->replay_settings.replay_feature_enabled) {
replay_config.bits.FREESYNC_PANEL_REPLAY_MODE = 1;
@@ -1095,7 +1095,7 @@ bool edp_send_replay_cmd(struct dc_link *link,
return false;
if (dp_pr_get_panel_inst(dc, link, &panel_inst))
- cmd_data->panel_inst = panel_inst;
+ cmd_data->panel_inst = (uint8_t)panel_inst;
else {
DC_LOG_DC("%s(): get edp panel inst fail ", __func__);
return false;
@@ -1120,7 +1120,7 @@ bool edp_set_coasting_vtotal(struct dc_link *link, uint32_t coasting_vtotal, uin
if (coasting_vtotal && (link->replay_settings.coasting_vtotal != coasting_vtotal ||
link->replay_settings.frame_skip_number != frame_skip_number)) {
- replay->funcs->replay_set_coasting_vtotal(replay, coasting_vtotal, panel_inst, frame_skip_number);
+ replay->funcs->replay_set_coasting_vtotal(replay, coasting_vtotal, (uint8_t)panel_inst, frame_skip_number);
link->replay_settings.coasting_vtotal = coasting_vtotal;
link->replay_settings.frame_skip_number = frame_skip_number;
}
@@ -1142,7 +1142,7 @@ bool edp_replay_residency(const struct dc_link *link,
return false;
if (replay != NULL && link->replay_settings.replay_feature_enabled)
- replay->funcs->replay_residency(replay, panel_inst, residency, is_start, mode);
+ replay->funcs->replay_residency(replay, (uint8_t)panel_inst, residency, is_start, mode);
else
*residency = 0;
@@ -1167,7 +1167,7 @@ bool edp_set_replay_power_opt_and_coasting_vtotal(struct dc_link *link,
if (link->replay_settings.replay_feature_enabled &&
replay->funcs->replay_set_power_opt_and_coasting_vtotal) {
replay->funcs->replay_set_power_opt_and_coasting_vtotal(replay,
- *power_opts, panel_inst, coasting_vtotal, frame_skip_number);
+ *power_opts, (uint8_t)panel_inst, coasting_vtotal, frame_skip_number);
link->replay_settings.replay_power_opt_active = *power_opts;
link->replay_settings.coasting_vtotal = coasting_vtotal;
link->replay_settings.frame_skip_number = frame_skip_number;
@@ -1251,10 +1251,10 @@ static void edp_set_assr_enable(const struct dc *pDC, struct dc_link *link,
memset(&cmd, 0, sizeof(cmd));
- link_enc_index = link->link_enc->transmitter - TRANSMITTER_UNIPHY_A;
+ link_enc_index = (uint8_t)(link->link_enc->transmitter - TRANSMITTER_UNIPHY_A);
if (link_res->hpo_dp_link_enc) {
- link_enc_index = link_res->hpo_dp_link_enc->inst;
+ link_enc_index = (uint8_t)link_res->hpo_dp_link_enc->inst;
use_hpo_dp_link_enc = true;
}
diff --git a/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn20/dcn20_mmhubbub.c b/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn20/dcn20_mmhubbub.c
index 2a422e223bf2..2e0f07ec04e4 100644
--- a/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn20/dcn20_mmhubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn20/dcn20_mmhubbub.c
@@ -40,8 +40,8 @@
#define FN(reg_name, field_name) \
mcif_wb20->mcif_wb_shift->field_name, mcif_wb20->mcif_wb_mask->field_name
-#define MCIF_ADDR(addr) (((unsigned long long)addr & 0xffffffffff) + 0xFE) >> 8
-#define MCIF_ADDR_HIGH(addr) (unsigned long long)addr >> 40
+#define MCIF_ADDR(addr) ((uint32_t)((((unsigned long long)(addr) & 0xffffffffffULL) + 0xFEULL) >> 8))
+#define MCIF_ADDR_HIGH(addr) ((uint32_t)(((unsigned long long)(addr)) >> 40))
/* wbif programming guide:
* 1. set up wbif parameter:
diff --git a/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn32/dcn32_mmhubbub.c b/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn32/dcn32_mmhubbub.c
index c3b089ba511a..6b6f80a3bfd5 100644
--- a/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn32/dcn32_mmhubbub.c
+++ b/drivers/gpu/drm/amd/display/dc/mmhubbub/dcn32/dcn32_mmhubbub.c
@@ -40,8 +40,8 @@
#define FN(reg_name, field_name) \
mcif_wb30->mcif_wb_shift->field_name, mcif_wb30->mcif_wb_mask->field_name
-#define MCIF_ADDR(addr) (((unsigned long long)addr & 0xffffffffff) + 0xFE) >> 8
-#define MCIF_ADDR_HIGH(addr) (unsigned long long)addr >> 40
+#define MCIF_ADDR(addr) ((uint32_t)((((unsigned long long)(addr) & 0xffffffffffULL) + 0xFEULL) >> 8))
+#define MCIF_ADDR_HIGH(addr) ((uint32_t)(((unsigned long long)(addr)) >> 40))
/* wbif programming guide:
* 1. set up wbif parameter:
diff --git a/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c b/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
index 39ce4d4a61a1..c558b1d633f3 100644
--- a/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
+++ b/drivers/gpu/drm/amd/display/dc/optc/dcn20/dcn20_optc.c
@@ -305,8 +305,8 @@ static void optc2_align_vblanks(
L = div_u64(L, master_h_total);
L = div_u64(L, slave_pixel_clock_100Hz);
XY = div_u64(L, p);
- Y = master_v_active - XY - 1;
- X = div_u64(((XY + 1) * p - L) * master_h_total, p * master_clock_divider);
+ Y = (uint32_t)(master_v_active - XY - 1);
+ X = (uint32_t)div_u64(((XY + 1) * p - L) * master_h_total, p * master_clock_divider);
/*
* set master OTG to unlock when V/H
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dce110/dce110_resource.c
index ee2877cf27c5..0138868e198b 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dce110/dce110_resource.c
@@ -1151,7 +1151,7 @@ static struct pipe_ctx *dce110_acquire_underlay(
/*pipe_ctx->plane_res.ipp = res_ctx->pool->ipps[underlay_idx];*/
pipe_ctx->plane_res.xfm = pool->transforms[underlay_idx];
pipe_ctx->stream_res.opp = pool->opps[underlay_idx];
- pipe_ctx->pipe_idx = underlay_idx;
+ pipe_ctx->pipe_idx = (uint8_t)underlay_idx;
pipe_ctx->stream = stream;
@@ -1161,7 +1161,7 @@ static struct pipe_ctx *dce110_acquire_underlay(
hws->funcs.enable_display_power_gating(
dc,
- pipe_ctx->stream_res.tg->inst,
+ (uint8_t)pipe_ctx->stream_res.tg->inst,
dcb, PIPE_GATING_CONTROL_DISABLE);
/*
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn10/dcn10_resource.c
index 8fc8d441a592..b7bd7344065b 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn10/dcn10_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn10/dcn10_resource.c
@@ -1141,7 +1141,7 @@ static struct pipe_ctx *dcn10_acquire_free_pipe_for_layer(
idle_pipe->plane_res.hubp = pool->hubps[idle_pipe->pipe_idx];
idle_pipe->plane_res.ipp = pool->ipps[idle_pipe->pipe_idx];
idle_pipe->plane_res.dpp = pool->dpps[idle_pipe->pipe_idx];
- idle_pipe->plane_res.mpcc_inst = pool->dpps[idle_pipe->pipe_idx]->inst;
+ idle_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[idle_pipe->pipe_idx]->inst;
return idle_pipe;
}
@@ -1737,7 +1737,7 @@ struct resource_pool *dcn10_create_resource_pool(
if (!pool)
return NULL;
- if (dcn10_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn10_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
kfree(pool);
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
index cb155a0f1c30..038798808e52 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
@@ -1523,13 +1523,13 @@ bool dcn20_split_stream_for_odm(
*next_odm_pipe = *prev_odm_pipe;
- next_odm_pipe->pipe_idx = pipe_idx;
+ next_odm_pipe->pipe_idx = (uint8_t)pipe_idx;
next_odm_pipe->plane_res.mi = pool->mis[next_odm_pipe->pipe_idx];
next_odm_pipe->plane_res.hubp = pool->hubps[next_odm_pipe->pipe_idx];
next_odm_pipe->plane_res.ipp = pool->ipps[next_odm_pipe->pipe_idx];
next_odm_pipe->plane_res.xfm = pool->transforms[next_odm_pipe->pipe_idx];
next_odm_pipe->plane_res.dpp = pool->dpps[next_odm_pipe->pipe_idx];
- next_odm_pipe->plane_res.mpcc_inst = pool->dpps[next_odm_pipe->pipe_idx]->inst;
+ next_odm_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[next_odm_pipe->pipe_idx]->inst;
next_odm_pipe->stream_res.dsc = NULL;
if (prev_odm_pipe->next_odm_pipe && prev_odm_pipe->next_odm_pipe != next_odm_pipe) {
next_odm_pipe->next_odm_pipe = prev_odm_pipe->next_odm_pipe;
@@ -1580,13 +1580,13 @@ void dcn20_split_stream_for_mpc(
*secondary_pipe = *primary_pipe;
secondary_pipe->bottom_pipe = sec_bot_pipe;
- secondary_pipe->pipe_idx = pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)pipe_idx;
secondary_pipe->plane_res.mi = pool->mis[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.hubp = pool->hubps[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.ipp = pool->ipps[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.xfm = pool->transforms[secondary_pipe->pipe_idx];
secondary_pipe->plane_res.dpp = pool->dpps[secondary_pipe->pipe_idx];
- secondary_pipe->plane_res.mpcc_inst = pool->dpps[secondary_pipe->pipe_idx]->inst;
+ secondary_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[secondary_pipe->pipe_idx]->inst;
secondary_pipe->stream_res.dsc = NULL;
if (primary_pipe->bottom_pipe && primary_pipe->bottom_pipe != secondary_pipe) {
ASSERT(!secondary_pipe->bottom_pipe);
@@ -1736,7 +1736,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].next_odm_pipe->pipe_idx;
if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
}
}
if (secondary_pipe == NULL &&
@@ -1744,7 +1744,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
preferred_pipe_idx = dc->current_state->res_ctx.pipe_ctx[primary_pipe->pipe_idx].bottom_pipe->pipe_idx;
if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
}
}
@@ -1762,7 +1762,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
break;
}
}
@@ -1783,7 +1783,7 @@ struct pipe_ctx *dcn20_find_secondary_pipe(struct dc *dc,
if (res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
break;
}
}
@@ -2215,7 +2215,7 @@ struct pipe_ctx *dcn20_acquire_free_pipe_for_layer(
sec_dpp_pipe->plane_res.hubp = pool->hubps[sec_dpp_pipe->pipe_idx];
sec_dpp_pipe->plane_res.ipp = pool->ipps[sec_dpp_pipe->pipe_idx];
sec_dpp_pipe->plane_res.dpp = pool->dpps[sec_dpp_pipe->pipe_idx];
- sec_dpp_pipe->plane_res.mpcc_inst = pool->dpps[sec_dpp_pipe->pipe_idx]->inst;
+ sec_dpp_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[sec_dpp_pipe->pipe_idx]->inst;
return sec_dpp_pipe;
}
@@ -2623,7 +2623,7 @@ static bool dcn20_resource_construct(
ranges.num_reader_wm_sets = 0;
if (loaded_bb->num_states == 1) {
- ranges.reader_wm_sets[0].wm_inst = i;
+ ranges.reader_wm_sets[0].wm_inst = (uint8_t)i;
ranges.reader_wm_sets[0].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
ranges.reader_wm_sets[0].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
ranges.reader_wm_sets[0].min_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
@@ -2632,7 +2632,7 @@ static bool dcn20_resource_construct(
ranges.num_reader_wm_sets = 1;
} else if (loaded_bb->num_states > 1) {
for (i = 0; i < 4 && i < loaded_bb->num_states; i++) {
- ranges.reader_wm_sets[i].wm_inst = i;
+ ranges.reader_wm_sets[i].wm_inst = (uint8_t)i;
ranges.reader_wm_sets[i].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
ranges.reader_wm_sets[i].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
DC_FP_START();
@@ -2830,7 +2830,7 @@ struct resource_pool *dcn20_create_resource_pool(
if (!pool)
return NULL;
- if (dcn20_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn20_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
index 7066db2ae8fa..89a1931b8d23 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
@@ -1751,7 +1751,7 @@ struct resource_pool *dcn21_create_resource_pool(
if (!pool)
return NULL;
- if (dcn21_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn21_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
index 7de5a2ccf722..baefddd03438 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
@@ -1567,13 +1567,13 @@ static bool dcn30_split_stream_for_mpc_or_odm(
*sec_pipe = *pri_pipe;
- sec_pipe->pipe_idx = pipe_idx;
+ sec_pipe->pipe_idx = (uint8_t)pipe_idx;
sec_pipe->plane_res.mi = pool->mis[pipe_idx];
sec_pipe->plane_res.hubp = pool->hubps[pipe_idx];
sec_pipe->plane_res.ipp = pool->ipps[pipe_idx];
sec_pipe->plane_res.xfm = pool->transforms[pipe_idx];
sec_pipe->plane_res.dpp = pool->dpps[pipe_idx];
- sec_pipe->plane_res.mpcc_inst = pool->dpps[pipe_idx]->inst;
+ sec_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[pipe_idx]->inst;
sec_pipe->stream_res.dsc = NULL;
if (odm) {
if (pri_pipe->next_odm_pipe) {
@@ -1627,7 +1627,7 @@ static struct pipe_ctx *dcn30_find_split_pipe(
if (old_index >= 0 && context->res_ctx.pipe_ctx[old_index].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[old_index];
- pipe->pipe_idx = old_index;
+ pipe->pipe_idx = (uint8_t)old_index;
}
if (!pipe)
@@ -1636,7 +1636,7 @@ static struct pipe_ctx *dcn30_find_split_pipe(
&& dc->current_state->res_ctx.pipe_ctx[i].prev_odm_pipe == NULL) {
if (context->res_ctx.pipe_ctx[i].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[i];
- pipe->pipe_idx = i;
+ pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -1651,7 +1651,7 @@ static struct pipe_ctx *dcn30_find_split_pipe(
for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) {
if (context->res_ctx.pipe_ctx[i].stream == NULL) {
pipe = &context->res_ctx.pipe_ctx[i];
- pipe->pipe_idx = i;
+ pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -2383,7 +2383,7 @@ static bool dcn30_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //3
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2673,7 +2673,7 @@ struct resource_pool *dcn30_create_resource_pool(
if (!pool)
return NULL;
- if (dcn30_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn30_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
index e8e1ebe33a24..625d9ec713a9 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
@@ -1362,7 +1362,7 @@ static void set_wm_ranges(
struct _vcs_dpi_soc_bounding_box_st *loaded_bb)
{
struct pp_smu_wm_range_sets ranges = {0};
- int i;
+ unsigned int i;
ranges.num_reader_wm_sets = 0;
@@ -1376,7 +1376,7 @@ static void set_wm_ranges(
ranges.num_reader_wm_sets = 1;
} else if (loaded_bb->num_states > 1) {
for (i = 0; i < 4 && i < loaded_bb->num_states; i++) {
- ranges.reader_wm_sets[i].wm_inst = i;
+ ranges.reader_wm_sets[i].wm_inst = (uint8_t)i;
ranges.reader_wm_sets[i].min_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MIN;
ranges.reader_wm_sets[i].max_drain_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
DC_FP_START();
@@ -1510,7 +1510,7 @@ static bool dcn301_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -1777,7 +1777,7 @@ struct resource_pool *dcn301_create_resource_pool(
if (!pool)
return NULL;
- if (dcn301_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn301_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
index 16220b5ed885..6f380363033a 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
@@ -1300,7 +1300,7 @@ static bool dcn302_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->res_cap->num_mpc_3dlut; //3
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->res_cap->num_mpc_3dlut;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -1560,7 +1560,7 @@ struct resource_pool *dcn302_create_resource_pool(const struct dc_init_data *ini
if (!pool)
return NULL;
- if (dcn302_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn302_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return pool;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
index 5203f659944d..8a7f62ab98b5 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
@@ -1244,7 +1244,7 @@ static bool dcn303_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->res_cap->num_mpc_3dlut; //3
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->res_cap->num_mpc_3dlut;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -1492,7 +1492,7 @@ struct resource_pool *dcn303_create_resource_pool(const struct dc_init_data *ini
if (!pool)
return NULL;
- if (dcn303_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn303_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return pool;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
index fa4df727f123..649b5e7c0373 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
@@ -1984,7 +1984,7 @@ static bool dcn31_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2276,7 +2276,7 @@ struct resource_pool *dcn31_create_resource_pool(
if (!pool)
return NULL;
- if (dcn31_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn31_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
index 6de67cd1c81b..6a4094663050 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
@@ -1918,7 +1918,7 @@ static bool dcn314_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2195,7 +2195,7 @@ struct resource_pool *dcn314_create_resource_pool(
if (!pool)
return NULL;
- if (dcn314_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn314_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
index 3db969852c5d..1e86a5e4d113 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
@@ -1954,7 +1954,7 @@ static bool dcn315_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2217,7 +2217,7 @@ struct resource_pool *dcn315_create_resource_pool(
if (!pool)
return NULL;
- if (dcn315_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn315_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
index 41569821e3ab..6369fc90f84b 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
@@ -1830,7 +1830,7 @@ static bool dcn316_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2083,7 +2083,7 @@ struct resource_pool *dcn316_create_resource_pool(
if (!pool)
return NULL;
- if (dcn316_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn316_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
index c2059960a6d9..6f0a3b0ff2d3 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
@@ -2306,7 +2306,7 @@ static bool dcn32_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2623,7 +2623,7 @@ struct resource_pool *dcn32_create_resource_pool(
if (!pool)
return NULL;
- if (dcn32_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn32_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
@@ -2751,7 +2751,7 @@ static struct pipe_ctx *find_idle_secondary_pipe_check_mpo(
if ((res_ctx->pipe_ctx[preferred_pipe_idx].stream == NULL) &&
!(next_odm_mpo_pipe && next_odm_mpo_pipe->pipe_idx == preferred_pipe_idx)) {
secondary_pipe = &res_ctx->pipe_ctx[preferred_pipe_idx];
- secondary_pipe->pipe_idx = preferred_pipe_idx;
+ secondary_pipe->pipe_idx = (uint8_t)preferred_pipe_idx;
}
}
@@ -2764,7 +2764,7 @@ static struct pipe_ctx *find_idle_secondary_pipe_check_mpo(
if ((res_ctx->pipe_ctx[i].stream == NULL) &&
!(next_odm_mpo_pipe && next_odm_mpo_pipe->pipe_idx == i)) {
secondary_pipe = &res_ctx->pipe_ctx[i];
- secondary_pipe->pipe_idx = i;
+ secondary_pipe->pipe_idx = (uint8_t)i;
break;
}
}
@@ -2798,7 +2798,7 @@ static struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
pipe = &old_ctx->pipe_ctx[head_index];
if (pipe->bottom_pipe && res_ctx->pipe_ctx[pipe->bottom_pipe->pipe_idx].stream == NULL) {
idle_pipe = &res_ctx->pipe_ctx[pipe->bottom_pipe->pipe_idx];
- idle_pipe->pipe_idx = pipe->bottom_pipe->pipe_idx;
+ idle_pipe->pipe_idx = (uint8_t)pipe->bottom_pipe->pipe_idx;
} else {
idle_pipe = find_idle_secondary_pipe_check_mpo(res_ctx, pool, head_pipe);
if (!idle_pipe)
@@ -2812,7 +2812,7 @@ static struct pipe_ctx *dcn32_acquire_idle_pipe_for_head_pipe_in_layer(
idle_pipe->plane_res.hubp = pool->hubps[idle_pipe->pipe_idx];
idle_pipe->plane_res.ipp = pool->ipps[idle_pipe->pipe_idx];
idle_pipe->plane_res.dpp = pool->dpps[idle_pipe->pipe_idx];
- idle_pipe->plane_res.mpcc_inst = pool->dpps[idle_pipe->pipe_idx]->inst;
+ idle_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[idle_pipe->pipe_idx]->inst;
return idle_pipe;
}
@@ -2863,7 +2863,7 @@ struct pipe_ctx *dcn32_acquire_free_pipe_as_secondary_dpp_pipe(
pool, opp_head_pipe);
if (free_pipe_idx >= 0) {
free_pipe = &new_ctx->res_ctx.pipe_ctx[free_pipe_idx];
- free_pipe->pipe_idx = free_pipe_idx;
+ free_pipe->pipe_idx = (uint8_t)free_pipe_idx;
free_pipe->stream = opp_head_pipe->stream;
free_pipe->stream_res.tg = opp_head_pipe->stream_res.tg;
free_pipe->stream_res.opp = opp_head_pipe->stream_res.opp;
@@ -2872,7 +2872,7 @@ struct pipe_ctx *dcn32_acquire_free_pipe_as_secondary_dpp_pipe(
free_pipe->plane_res.ipp = pool->ipps[free_pipe->pipe_idx];
free_pipe->plane_res.dpp = pool->dpps[free_pipe->pipe_idx];
free_pipe->plane_res.mpcc_inst =
- pool->dpps[free_pipe->pipe_idx]->inst;
+ (uint8_t)pool->dpps[free_pipe->pipe_idx]->inst;
} else {
ASSERT(opp_head_pipe);
free_pipe = NULL;
@@ -2894,7 +2894,7 @@ struct pipe_ctx *dcn32_acquire_free_pipe_as_secondary_opp_head(
if (free_pipe_idx >= 0) {
free_pipe = &new_ctx->res_ctx.pipe_ctx[free_pipe_idx];
- free_pipe->pipe_idx = free_pipe_idx;
+ free_pipe->pipe_idx = (uint8_t)free_pipe_idx;
free_pipe->stream = otg_master->stream;
free_pipe->stream_res.tg = otg_master->stream_res.tg;
free_pipe->stream_res.dsc = NULL;
@@ -2904,7 +2904,7 @@ struct pipe_ctx *dcn32_acquire_free_pipe_as_secondary_opp_head(
free_pipe->plane_res.ipp = pool->ipps[free_pipe_idx];
free_pipe->plane_res.xfm = pool->transforms[free_pipe_idx];
free_pipe->plane_res.dpp = pool->dpps[free_pipe_idx];
- free_pipe->plane_res.mpcc_inst = pool->dpps[free_pipe_idx]->inst;
+ free_pipe->plane_res.mpcc_inst = (uint8_t)pool->dpps[free_pipe_idx]->inst;
free_pipe->dsc_padding_params = otg_master->dsc_padding_params;
if (free_pipe->stream->timing.flags.DSC == 1) {
dcn20_acquire_dsc(free_pipe->stream->ctx->dc,
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
index b7bed427bfc7..663e9335fdec 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
@@ -1805,7 +1805,7 @@ static bool dcn321_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2113,7 +2113,7 @@ struct resource_pool *dcn321_create_resource_pool(
if (!pool)
return NULL;
- if (dcn321_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn321_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
index 1315ae4adcd9..27f8f13912b3 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
@@ -1951,7 +1951,7 @@ static bool dcn35_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2269,7 +2269,7 @@ struct resource_pool *dcn35_create_resource_pool(
if (!pool)
return NULL;
- if (dcn35_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn35_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
index e9c0c0c166bb..d032db65108b 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
@@ -1924,7 +1924,7 @@ static bool dcn351_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2242,7 +2242,7 @@ struct resource_pool *dcn351_create_resource_pool(
if (!pool)
return NULL;
- if (dcn351_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn351_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
index ae59949c58ba..42fa8883d1b7 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn36/dcn36_resource.c
@@ -1921,7 +1921,7 @@ static bool dcn36_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2239,7 +2239,7 @@ struct resource_pool *dcn36_create_resource_pool(
if (!pool)
return NULL;
- if (dcn36_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn36_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
index 5a3684307c6b..6aa051154f5e 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
@@ -2006,7 +2006,7 @@ static bool dcn401_resource_construct(
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut; //4, configurable to be before or after BLND in MPCC
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
@@ -2340,7 +2340,7 @@ struct resource_pool *dcn401_create_resource_pool(
if (!pool)
return NULL;
- if (dcn401_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn401_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn42/dcn42_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn42/dcn42_resource.c
index c0d37f00fed9..d5efe1e8fcee 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn42/dcn42_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn42/dcn42_resource.c
@@ -1969,7 +1969,7 @@ static bool dcn42_resource_construct(
dc->caps.color.mpc.gamut_remap = 1;
//configurable to be before or after BLND in MPCC
- dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut;
+ dc->caps.color.mpc.num_3dluts = (uint16_t)pool->base.res_cap->num_mpc_3dlut;
dc->caps.color.mpc.num_rmcm_3dluts = 2;
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
@@ -2349,7 +2349,7 @@ struct resource_pool *dcn42_create_resource_pool(
if (!pool)
return NULL;
- if (dcn42_resource_construct(init_data->num_virtual_links, dc, pool))
+ if (dcn42_resource_construct((uint8_t)init_data->num_virtual_links, dc, pool))
return &pool->base;
BREAK_TO_DEBUGGER();
diff --git a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
index 1b397fa7e05c..e4811c3728a9 100644
--- a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
+++ b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
@@ -42,7 +42,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* dcfclk */
if (dc_clk_table->num_entries_per_clk.num_dcfclk_levels) {
- dml_clk_table->dcfclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
+ dml_clk_table->dcfclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dcfclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.dcfclk_mhz &&
@@ -52,7 +52,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->dcfclk.num_clk_values = i + 1;
} else {
dml_clk_table->dcfclk.clk_values_khz[i] = 0;
- dml_clk_table->dcfclk.num_clk_values = i;
+ dml_clk_table->dcfclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->dcfclk.clk_values_khz[i] = dc_clk_table->entries[i].dcfclk_mhz * 1000;
@@ -65,7 +65,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* fclk */
if (dc_clk_table->num_entries_per_clk.num_fclk_levels) {
- dml_clk_table->fclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_fclk_levels;
+ dml_clk_table->fclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_fclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->fclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.fclk_mhz &&
@@ -75,7 +75,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->fclk.num_clk_values = i + 1;
} else {
dml_clk_table->fclk.clk_values_khz[i] = 0;
- dml_clk_table->fclk.num_clk_values = i;
+ dml_clk_table->fclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->fclk.clk_values_khz[i] = dc_clk_table->entries[i].fclk_mhz * 1000;
@@ -88,7 +88,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* uclk */
if (dc_clk_table->num_entries_per_clk.num_memclk_levels) {
- dml_clk_table->uclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_memclk_levels;
+ dml_clk_table->uclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_memclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->uclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.memclk_mhz &&
@@ -98,7 +98,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->uclk.num_clk_values = i + 1;
} else {
dml_clk_table->uclk.clk_values_khz[i] = 0;
- dml_clk_table->uclk.num_clk_values = i;
+ dml_clk_table->uclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->uclk.clk_values_khz[i] = dc_clk_table->entries[i].memclk_mhz * 1000;
@@ -114,7 +114,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* dispclk */
if (dc_clk_table->num_entries_per_clk.num_dispclk_levels) {
- dml_clk_table->dispclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dispclk_levels;
+ dml_clk_table->dispclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dispclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dispclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.dispclk_mhz &&
@@ -124,7 +124,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->dispclk.num_clk_values = i + 1;
} else {
dml_clk_table->dispclk.clk_values_khz[i] = 0;
- dml_clk_table->dispclk.num_clk_values = i;
+ dml_clk_table->dispclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->dispclk.clk_values_khz[i] = dc_clk_table->entries[i].dispclk_mhz * 1000;
@@ -137,7 +137,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* dppclk */
if (dc_clk_table->num_entries_per_clk.num_dppclk_levels) {
- dml_clk_table->dppclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dppclk_levels;
+ dml_clk_table->dppclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dppclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dppclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.dppclk_mhz &&
@@ -147,7 +147,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->dppclk.num_clk_values = i + 1;
} else {
dml_clk_table->dppclk.clk_values_khz[i] = 0;
- dml_clk_table->dppclk.num_clk_values = i;
+ dml_clk_table->dppclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->dppclk.clk_values_khz[i] = dc_clk_table->entries[i].dppclk_mhz * 1000;
@@ -160,7 +160,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* dtbclk */
if (dc_clk_table->num_entries_per_clk.num_dtbclk_levels) {
- dml_clk_table->dtbclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dtbclk_levels;
+ dml_clk_table->dtbclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dtbclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dtbclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.dtbclk_mhz &&
@@ -170,7 +170,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->dtbclk.num_clk_values = i + 1;
} else {
dml_clk_table->dtbclk.clk_values_khz[i] = 0;
- dml_clk_table->dtbclk.num_clk_values = i;
+ dml_clk_table->dtbclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->dtbclk.clk_values_khz[i] = dc_clk_table->entries[i].dtbclk_mhz * 1000;
@@ -183,7 +183,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
/* socclk */
if (dc_clk_table->num_entries_per_clk.num_socclk_levels) {
- dml_clk_table->socclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_socclk_levels;
+ dml_clk_table->socclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_socclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->socclk.num_clk_values) {
if (use_clock_dc_limits && dc_bw_params->dc_mode_limit.socclk_mhz &&
@@ -193,7 +193,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dml_clk_table->socclk.num_clk_values = i + 1;
} else {
dml_clk_table->socclk.clk_values_khz[i] = 0;
- dml_clk_table->socclk.num_clk_values = i;
+ dml_clk_table->socclk.num_clk_values = (uint8_t)i;
}
} else {
dml_clk_table->socclk.clk_values_khz[i] = dc_clk_table->entries[i].socclk_mhz * 1000;
diff --git a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn42/dcn42_soc_and_ip_translator.c b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn42/dcn42_soc_and_ip_translator.c
index e723b4d0aff3..16160f35da1b 100644
--- a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn42/dcn42_soc_and_ip_translator.c
+++ b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn42/dcn42_soc_and_ip_translator.c
@@ -47,8 +47,8 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
* for use with dml we need to fill in using an active value aiming for >= 2x DCFCLK
*/
if (dc_clk_table->num_entries_per_clk.num_fclk_levels && dc_clk_table->num_entries_per_clk.num_dcfclk_levels) {
- dml_clk_table->fclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
- dml_clk_table->dcfclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
+ dml_clk_table->fclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
+ dml_clk_table->dcfclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dcfclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dc_clk_table->num_entries_per_clk.num_dcfclk_levels) {
int j, max_fclk = 0;
@@ -70,7 +70,7 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
/* uclk */
if (dc_clk_table->num_entries_per_clk.num_memclk_levels) {
- dml_clk_table->uclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_memclk_levels;
+ dml_clk_table->uclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_memclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->uclk.num_clk_values) {
dml_clk_table->uclk.clk_values_khz[i] = dc_clk_table->entries[i].memclk_mhz * 1000;
@@ -84,7 +84,7 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
/* dispclk */
if (dc_clk_table->num_entries_per_clk.num_dispclk_levels) {
- dml_clk_table->dispclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dispclk_levels;
+ dml_clk_table->dispclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dispclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dispclk.num_clk_values) {
dml_clk_table->dispclk.clk_values_khz[i] = dc_clk_table->entries[i].dispclk_mhz * 1000;
@@ -101,7 +101,7 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
/* dppclk */
if (dc_clk_table->num_entries_per_clk.num_dppclk_levels) {
- dml_clk_table->dppclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dppclk_levels;
+ dml_clk_table->dppclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dppclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dppclk.num_clk_values) {
dml_clk_table->dppclk.clk_values_khz[i] = dc_clk_table->entries[i].dppclk_mhz * 1000;
@@ -117,7 +117,7 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
/* dtbclk */
if (dc_clk_table->num_entries_per_clk.num_dtbclk_levels) {
- dml_clk_table->dtbclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_dtbclk_levels;
+ dml_clk_table->dtbclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_dtbclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->dtbclk.num_clk_values) {
dml_clk_table->dtbclk.clk_values_khz[i] = dc_clk_table->entries[i].dtbclk_mhz * 1000;
@@ -129,7 +129,7 @@ static void dcn42_convert_dc_clock_table_to_soc_bb_clock_table(
/* socclk */
if (dc_clk_table->num_entries_per_clk.num_socclk_levels) {
- dml_clk_table->socclk.num_clk_values = dc_clk_table->num_entries_per_clk.num_socclk_levels;
+ dml_clk_table->socclk.num_clk_values = (uint8_t)dc_clk_table->num_entries_per_clk.num_socclk_levels;
for (i = 0; i < min(DML_MAX_CLK_TABLE_SIZE, MAX_NUM_DPM_LVL); i++) {
if (i < dml_clk_table->socclk.num_clk_values) {
dml_clk_table->socclk.clk_values_khz[i] = dc_clk_table->entries[i].socclk_mhz * 1000;
--
2.43.0
* [PATCH 07/19] drm/amd/display: Fix double free
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (5 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 06/19] drm/amd/display: Fix implicit narrowing conversion warnings Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 08/19] drm/amd/display: Introduce power module on Linux Chenyu Chen
` (12 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Ilya Bakoulin, Sridevi Arvindekar,
Chenyu Chen
From: Ilya Bakoulin <Ilya.Bakoulin@amd.com>
[Why/How]
Reset pointer/address to avoid double free.
Reviewed-by: Sridevi Arvindekar <sridevi.arvindekar@amd.com>
Signed-off-by: Ilya Bakoulin <Ilya.Bakoulin@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
index 72b0f3f8c2fd..e39fd97b3ffd 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
@@ -736,9 +736,12 @@ void dcn42_notify_wm_ranges(struct clk_mgr *clk_mgr_base)
clk_mgr_dcn42->smu_wm_set.mc_address.low_part);
dcn42_smu_transfer_wm_table_dram_2_smu(clk_mgr);
- if (clk_mgr_dcn42->smu_wm_set.wm_set && clk_mgr_dcn42->smu_wm_set.mc_address.quad_part != 0)
+ if (clk_mgr_dcn42->smu_wm_set.wm_set && clk_mgr_dcn42->smu_wm_set.mc_address.quad_part != 0) {
dm_helpers_free_gpu_mem(clk_mgr->base.ctx, DC_MEM_ALLOC_TYPE_GART,
clk_mgr_dcn42->smu_wm_set.wm_set);
+ clk_mgr_dcn42->smu_wm_set.wm_set = NULL;
+ clk_mgr_dcn42->smu_wm_set.mc_address.quad_part = 0;
+ }
}
@@ -1101,7 +1104,10 @@ void dcn42_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int)
{
struct clk_mgr_dcn42 *clk_mgr = TO_CLK_MGR_DCN42(clk_mgr_int);
- if (clk_mgr->smu_wm_set.wm_set && clk_mgr->smu_wm_set.mc_address.quad_part != 0)
+ if (clk_mgr->smu_wm_set.wm_set && clk_mgr->smu_wm_set.mc_address.quad_part != 0) {
dm_helpers_free_gpu_mem(clk_mgr_int->base.ctx, DC_MEM_ALLOC_TYPE_GART,
clk_mgr->smu_wm_set.wm_set);
+ clk_mgr->smu_wm_set.wm_set = NULL;
+ clk_mgr->smu_wm_set.mc_address.quad_part = 0;
+ }
}
--
2.43.0
* [PATCH 08/19] drm/amd/display: Introduce power module on Linux
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (6 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 07/19] drm/amd/display: Fix double free Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 09/19] drm/amd/display: Add " Chenyu Chen
` (11 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Ray Wu, Chenyu Chen
From: Ray Wu <ray.wu@amd.com>
[Why]
Other OSes supported by DC use the power module to manage panel power
features such as backlight and self-refresh. It contains enhancements
on top of what amdgpu_dm is doing today that can benefit power savings.
[How]
Introduce the power module. It is currently not used anywhere; a
future change will incorporate it into amdgpu_dm.
Reviewed-by: Leo Li <sunpeng.li@amd.com>
Signed-off-by: Ray Wu <ray.wu@amd.com>
Signed-off-by: Leo Li <sunpeng.li@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../display/amdgpu_dm/amdgpu_dm_services.c | 11 +
.../gpu/drm/amd/display/dc/core/dc_stream.c | 6 +
drivers/gpu/drm/amd/display/dc/dc_stream.h | 3 +
.../drm/amd/display/modules/inc/mod_power.h | 415 +++
.../drm/amd/display/modules/power/Makefile | 2 +-
.../gpu/drm/amd/display/modules/power/power.c | 3030 +++++++++++++++++
6 files changed, 3466 insertions(+), 1 deletion(-)
create mode 100644 drivers/gpu/drm/amd/display/modules/inc/mod_power.h
create mode 100644 drivers/gpu/drm/amd/display/modules/power/power.c
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 8550d5e8b753..0ef7435ffda9 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -62,3 +62,14 @@ void dm_trace_smu_exit(bool success, uint32_t response, struct dc_context *ctx)
}
/**** power component interfaces ****/
+
+bool dm_query_extended_brightness_caps(struct dc_context *ctx,
+ enum dm_acpi_display_type display,
+ struct dm_acpi_atif_backlight_caps *pCaps)
+{
+ /*
+ * TODO: Implement query for extended backlight caps.
+ * Some plumbing required, see amdgpu_atif_query_backlight_caps()
+ */
+ return false;
+}
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
index cca3dece08d3..9c1d721011ca 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
@@ -259,6 +259,12 @@ const struct dc_stream_status *dc_stream_get_status_const(
return dc_state_get_stream_status(dc->current_state, stream);
}
+struct dc_link *dc_stream_get_link(
+ const struct dc_stream_state *stream)
+{
+ return stream->link;
+}
+
void program_cursor_attributes(
struct dc *dc,
struct dc_stream_state *stream)
diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
index 88f70a9b64b1..6a8c1390b85f 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
+++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
@@ -494,6 +494,9 @@ struct surface_update_descriptor dc_check_update_surfaces_for_stream(
int surface_count,
struct dc_stream_update *stream_update);
+struct dc_link *dc_stream_get_link(
+ const struct dc_stream_state *dc_stream);
+
/**
* Create a new default stream for the requested sink
*/
diff --git a/drivers/gpu/drm/amd/display/modules/inc/mod_power.h b/drivers/gpu/drm/amd/display/modules/inc/mod_power.h
new file mode 100644
index 000000000000..89037f7b7961
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/modules/inc/mod_power.h
@@ -0,0 +1,415 @@
+/* Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved. */
+
+#ifndef MODULES_INC_MOD_POWER_H_
+#define MODULES_INC_MOD_POWER_H_
+
+#include "dm_services.h"
+
+struct mod_power_init_params {
+
+ bool disable_fractional_pwm;
+
+ /* Use nits based brightness instead of brightness percentage
+ */
+ bool use_nits_based_brightness;
+ unsigned int panel_min_millinits;
+ unsigned int panel_max_millinits;
+
+ unsigned int min_backlight_pwm;
+ unsigned int max_backlight_pwm;
+
+ unsigned int min_abm_backlight;
+ unsigned int num_backlight_levels;
+ bool backlight_ramping_override;
+ unsigned int backlight_ramping_reduction;
+ unsigned int backlight_ramping_start;
+ bool def_varibright_enable;
+ unsigned int def_varibright_level;
+ unsigned int varibright_level;
+ unsigned int abm_config_setting;
+
+ bool allow_psr_smu_optimizations;
+
+ bool allow_psr_multi_disp_optimizations;
+
+ bool use_custom_backlight_caps;
+ unsigned int custom_backlight_caps_config_no;
+ bool use_linear_backlight_curve;
+};
+
+struct mod_power {
+ int dummy;
+};
+
+/* VariBright settings structure */
+struct varibright_info {
+ unsigned int level;
+ bool enable;
+ bool activate;
+};
+
+struct mod_power_psr_context {
+ /* ddc line */
+ unsigned int channel;
+ /* Transmitter id */
+ unsigned int transmitter_id;
+ /* Engine Id is used for Dig Be source select */
+ unsigned int engine_id;
+ /* Controller Id used for Dig Fe source select */
+ unsigned int controller_id;
+ /* Pcie or Uniphy */
+ unsigned int phy_type;
+ /* Physical PHY Id used by SMU interpretation */
+ unsigned int smu_phy_id;
+ /* Vertical total pixels from crtc timing.
+ * This is used for static screen detection.
+ * ie. If we want to detect half a frame,
+ * we use this to determine the hyst lines.
+ */
+ unsigned int crtc_timing_vertical_total;
+ /* PSR supported from panel capabilities and
+ * current display configuration
+ */
+ bool psr_supported_display_config;
+ /* Whether fast link training is supported by the panel */
+ bool psr_exit_link_training_required;
+ /* If RFB setup time is greater than the total VBLANK time,
+ * it is not possible for the sink to capture the video frame
+ * in the same frame the SDP is sent. In this case,
+ * the frame capture indication bit should be set and an extra
+ * static frame should be transmitted to the sink.
+ */
+ bool psr_frame_capture_indication_req;
+ /* Set the last possible line SDP may be transmitted without violating
+ * the RFB setup time or entering the active video frame.
+ */
+ unsigned int sdp_transmit_line_num_deadline;
+ /* The VSync rate in Hz used to calculate the
+ * step size for smooth brightness feature
+ */
+ unsigned int vsync_rate_hz;
+ unsigned int skip_psr_wait_for_pll_lock;
+ unsigned int number_of_controllers;
+ /* Unused, for future use. To indicate that first changed frame from
+ * state3 shouldn't result in psr_inactive, but rather to perform
+ * an automatic single frame rfb_update.
+ */
+ bool rfb_update_auto_en;
+ /* Number of frames before entering static screen */
+ unsigned int timehyst_frames;
+ /* Partial frames before entering static screen */
+ unsigned int hyst_lines;
+ /* # of repeated AUX transaction attempts to make before
+ * indicating failure to the driver
+ */
+ unsigned int aux_repeats;
+ /* Controls hw blocks to power down during PSR active state */
+ unsigned int psr_level;
+ /* Controls additional delay after remote frame capture before
+ * continuing power down
+ */
+ unsigned int frame_delay;
+ bool allow_smu_optimizations;
+ bool allow_multi_disp_optimizations;
+ unsigned int line_time_in_us;
+ /* Panel self refresh 2 selective update granularity required */
+ bool su_granularity_required;
+ /* psr2 selective update y granularity capability */
+ uint8_t su_y_granularity;
+ uint8_t rate_control_caps;
+ bool os_request_force_ffu;
+};
+
+enum psr_event {
+ psr_event_invalid = 0x0,
+ psr_event_vsync = 0x1,
+ psr_event_full_screen = 0x2,
+ psr_event_defer_enable = 0x4,
+ psr_event_hw_programming = 0x8,
+ psr_event_test_harness_enable_psr = 0x10,
+ psr_event_test_harness_disable_psr = 0x20,
+ psr_event_mpo_video_selective_update = 0x40,
+ psr_event_edp_panel_off_disable_psr = 0x80,
+ psr_event_dynamic_display_switch = 0x100,
+ psr_event_big_screen_video = 0x200,
+ psr_event_dds_defer_stream_enable = 0x800,
+ psr_event_dynamic_link_rate_control = 0x1000,
+ psr_event_vrr_transition = 0x2000,
+ psr_event_pause = 0x4000,
+ psr_event_immediate_flip = 0x8000,
+ psr_event_os_request_disable = 0x10000,
+ psr_event_os_request_force_ffu = 0x20000,
+ psr_event_os_override_hold = 0x40000,
+ psr_event_crc_window_active = 0x80000,
+};
+
+enum replay_event {
+ replay_event_invalid = 0x0,
+ replay_event_vsync = 0x1,
+ replay_event_full_screen = 0x2,
+ replay_event_mpo_video_selective_update = 0x4,
+ replay_event_big_screen_video = 0x8,
+ replay_event_hw_programming = 0x10,
+ replay_event_edp_panel_off_disable_psr = 0x20,
+ replay_event_general_ui = 0x40,
+ replay_event_vrr = 0x80,
+ replay_event_prepare_vtotal = 0x100,
+ replay_event_test_harness_enable_replay = 0x200,
+ replay_event_test_harness_disable_replay = 0x400,
+ replay_event_test_harness_ultra_sleep = 0x800,
+ replay_event_immediate_flip = 0x1000,
+ replay_event_vrr_transition = 0x2000,
+ replay_event_pause = 0x4000,
+ replay_event_disable_replay_while_DPMS = 0x8000,
+ replay_event_test_harness_mode = 0x10000,
+ replay_event_cursor_updating = 0x20000,
+ replay_event_sleep_resume = 0x40000,
+ replay_event_disable_in_AC = 0x80000,
+ replay_event_disable_replay_while_detect_display = 0x100000,
+ replay_event_disable_replay_while_switching_mux = 0x400000,
+ replay_event_infopacket = 0x800000,
+ replay_event_os_request_disable = 0x1000000,
+ replay_event_os_request_force_ffu = 0x2000000,
+ replay_event_os_override_hold = 0x4000000,
+ replay_event_crc_window_active = 0x8000000,
+};
+
+enum replay_enable_option {
+ pr_enable_option_static_screen = 0x1,
+ pr_enable_option_mpo_video = 0x2,
+ pr_enable_option_full_screen_video = 0x4,
+ pr_enable_option_general_ui = 0x8,
+ pr_enable_option_full_screen = 0x10,
+ pr_enable_option_static_screen_coasting = 0x10000,
+ pr_enable_option_mpo_video_coasting = 0x20000,
+ pr_enable_option_full_screen_video_coasting = 0x40000,
+ pr_enable_option_full_screen_coasting = 0x100000,
+};
+
+struct mod_power *mod_power_create(struct dc *dc,
+ struct mod_power_init_params *init_params,
+ unsigned int edp_num);
+
+void mod_power_destroy(struct mod_power *mod_power);
+
+bool mod_power_hw_init(struct mod_power *mod_power);
+
+bool mod_power_add_stream(struct mod_power *mod_power,
+ struct dc_stream_state *stream, struct psr_caps *caps);
+
+bool mod_power_remove_stream(struct mod_power *mod_power,
+ const struct dc_stream_state *stream);
+
+bool mod_power_replace_stream(struct mod_power *mod_power,
+ const struct dc_stream_state *current_stream,
+ struct dc_stream_state *new_stream,
+ struct psr_caps *new_caps);
+
+bool mod_power_set_backlight_nits(struct mod_power *mod_power,
+ struct dc_stream_state *streams,
+ unsigned int backlight_millinit,
+ unsigned int transition_time_millisec,
+ bool skip_aux,
+ bool is_hdr);
+
+bool mod_power_set_backlight_percent(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent,
+ unsigned int transition_time_millisec,
+ bool is_hdr);
+
+void mod_power_update_backlight(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent);
+
+void mod_power_update_backlight_nits(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinit);
+
+bool mod_power_get_backlight_pwm(struct mod_power *mod_power,
+ unsigned int *backlight_pwm,
+ unsigned int inst);
+
+bool mod_power_get_backlight_nits(struct mod_power *mod_power,
+ unsigned int *backlight_millinit,
+ unsigned int inst);
+
+bool mod_power_get_backlight_percent(struct mod_power *mod_power,
+ unsigned int *backlight_millipercent,
+ unsigned int inst);
+
+bool mod_power_get_hw_target_backlight_pwm_nits(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millinit,
+ unsigned int inst);
+
+bool mod_power_get_hw_target_backlight_pwm_percent(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millipercent,
+ unsigned int inst);
+
+bool mod_power_get_hw_target_backlight_pwm(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_u16_16);
+
+bool mod_power_get_hw_backlight_pwm(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight);
+
+bool mod_power_get_hw_backlight_pwm_nits(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millinit,
+ unsigned int inst);
+
+bool mod_power_get_hw_backlight_aux_nits(
+ struct mod_power *mod_power,
+ struct dc_stream_state **streams, int num_streams,
+ unsigned int *backlight_millinit_avg,
+ unsigned int *backlight_millinit_peak);
+
+bool mod_power_get_hw_backlight_pwm_percent(
+ struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millipercent,
+ unsigned int inst);
+
+void mod_power_initialize_backlight_caps
+ (struct mod_power *mod_power);
+
+bool mod_power_get_panel_backlight_boundaries
+ (struct mod_power *mod_power,
+ unsigned int *out_min_backlight,
+ unsigned int *out_max_backlight,
+ unsigned int *out_ac_backlight_percent,
+ unsigned int *out_dc_backlight_percent,
+ unsigned int inst);
+
+bool mod_power_set_smooth_brightness(struct mod_power *mod_power,
+ bool enable_brightness,
+ unsigned int inst);
+
+bool mod_power_notify_mode_change(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ bool is_hdr);
+
+bool mod_power_get_varibright_level(struct mod_power *mod_power,
+ unsigned int *varibright_level);
+
+bool mod_power_get_varibright_hw_level(struct mod_power *mod_power,
+ unsigned int *varibright_level);
+
+bool mod_power_get_varibright_default_level(struct mod_power *mod_power,
+ unsigned int *varibright_level);
+
+bool mod_power_get_varibright_enable(struct mod_power *mod_power,
+ bool *varibright_enable);
+
+bool mod_power_varibright_activate(struct mod_power *mod_power,
+ bool activate, struct dc_stream_update *stream_update);
+
+bool mod_power_varibright_feature_enable(struct mod_power *mod_power,
+ bool enable, struct dc_stream_update *stream_update);
+
+
+bool mod_power_varibright_set_level(struct mod_power *mod_power,
+ unsigned int level, struct dc_stream_update *stream_update);
+
+bool mod_power_varibright_set_hw_level(struct mod_power *mod_power,
+ unsigned int level, struct dc_stream_update *stream_update);
+
+bool mod_power_is_abm_active(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int inst);
+
+
+bool mod_power_set_psr_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream, bool set_event,
+ enum psr_event event, bool wait);
+
+bool mod_power_get_psr_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int *active_psr_events);
+
+bool mod_power_get_psr_state(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ enum dc_psr_state *state);
+
+bool mod_power_get_psr_enabled_status(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ bool *psr_enabled);
+
+bool mod_power_set_replay_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream, bool set_event,
+ enum replay_event event, bool wait_for_disable);
+
+bool mod_power_get_replay_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int *active_replay_events);
+
+bool mod_power_get_replay_active_status(const struct dc_stream_state *stream,
+ bool *replay_active);
+
+bool mod_power_replay_set_coasting_vtotal(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ uint32_t coasting_vtotal, uint16_t frame_skip_number);
+
+void mod_power_replay_residency(const struct dc_stream_state *stream,
+ unsigned int *residency, const bool is_start, const bool is_alpm);
+
+bool mod_power_replay_set_power_opt_and_coasting_vtotal(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, unsigned int active_replay_events, uint32_t coasting_vtotal,
+ bool is_ultra_sleep_mode, uint16_t frame_skip_number);
+
+void mod_power_replay_set_timing_sync_supported(struct mod_power *mod_power,
+ const struct dc_stream_state *stream);
+
+void mod_power_replay_disabled_adaptive_sync_sdp(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool force_disabled);
+
+void mod_power_replay_disabled_desync_error_detection(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool force_disabled);
+void mod_power_set_low_rr_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool low_rr_supported);
+
+void mod_power_set_video_conferencing_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool video_conferencing_activate);
+
+void mod_power_set_live_capture_with_cvt_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool live_capture_with_cvt_activate);
+
+void mod_power_set_replay_continuously_resync(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool enable);
+
+void mod_power_set_coasting_vtotal_without_frame_update(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, uint32_t coasting_vtotal);
+
+void mod_power_psr_residency(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ unsigned int *residency,
+ const uint8_t mode);
+bool mod_power_psr_get_active_psr_events(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, unsigned int *active_psr_events);
+bool mod_power_psr_set_sink_vtotal_in_psr_active(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ uint16_t psr_vtotal_idle,
+ uint16_t psr_vtotal_su);
+
+bool mod_power_backlight_percent_to_nits(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent,
+ unsigned int *backlight_millinit);
+bool mod_power_backlight_nits_to_percent(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinit,
+ unsigned int *backlight_millipercent);
+
+#endif /* MODULES_INC_MOD_POWER_H_ */
diff --git a/drivers/gpu/drm/amd/display/modules/power/Makefile b/drivers/gpu/drm/amd/display/modules/power/Makefile
index 9d1b22d35ece..b27a1ff3d86b 100644
--- a/drivers/gpu/drm/amd/display/modules/power/Makefile
+++ b/drivers/gpu/drm/amd/display/modules/power/Makefile
@@ -23,7 +23,7 @@
# Makefile for the 'power' sub-module of DAL.
#
-MOD_POWER = power_helpers.o
+MOD_POWER = power_helpers.o power.o
AMD_DAL_MOD_POWER = $(addprefix $(AMDDALPATH)/modules/power/,$(MOD_POWER))
#$(info ************ DAL POWER MODULE MAKEFILE ************)
diff --git a/drivers/gpu/drm/amd/display/modules/power/power.c b/drivers/gpu/drm/amd/display/modules/power/power.c
new file mode 100644
index 000000000000..6c73fecf57d5
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/modules/power/power.c
@@ -0,0 +1,3030 @@
+/*
+ * Copyright 2016 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ * Authors: AMD
+ *
+ */
+
+#include "dm_services.h"
+#include "dc.h"
+#include "mod_power.h"
+#include "core_types.h"
+#include "dmcu.h"
+#include "abm.h"
+#include "power_helpers.h"
+#include "dce/dmub_psr.h"
+#include "dal_asic_id.h"
+#include "link_service.h"
+#include <linux/math.h>
+
+#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+#define DC_TRACE_LEVEL_MESSAGEP(...) /* do nothing */
+
+#define MOD_POWER_MAX_CONCURRENT_STREAMS 32
+#define SMOOTH_BRIGHTNESS_ADJUSTMENT_TIME_IN_MS 500
+#define LOW_REFRESH_RATE_DURATION_US_UPPER_BOUND 25000
+
+
+struct backlight_state {
+ /* HW uses u16.16 format for backlight PWM */
+ unsigned int backlight_pwm;
+ /* DM may call power module to set backlight
+ * targeting percent brightness
+ */
+ unsigned int backlight_millipercent;
+ /* DM may call power module to set backlight based on an explicit
+ * nits value.
+ */
+ unsigned int backlight_millinit;
+ unsigned int frame_ramp;
+ bool smooth_brightness_enabled;
+ bool isHDR;
+};
+
+struct power_entity {
+ struct dc_stream_state *stream;
+ struct psr_caps *caps;
+ struct mod_power_psr_context *psr_context;
+
+ /*PSR cached properties*/
+ bool psr_enabled;
+ unsigned int psr_events;
+ unsigned int psr_power_opt;
+ unsigned int replay_events;
+};
+
+struct backlight_properties {
+ bool use_nits_based_brightness;
+ bool disable_fractional_pwm;
+
+ unsigned int min_abm_backlight;
+ unsigned int num_backlight_levels;
+
+ bool backlight_ramping_override;
+ unsigned int backlight_ramping_reduction;
+ unsigned int backlight_ramping_start;
+
+ /* Backlight cached properties */
+ unsigned int ac_backlight_percent;
+ unsigned int dc_backlight_percent;
+
+ /* backlight LUT stored in HW u16.16 format*/
+ unsigned int *backlight_lut;
+ unsigned int min_backlight_pwm;
+ unsigned int max_backlight_pwm;
+ unsigned int backlight_range;
+
+ /* Describes the panel's min and max luminance in millinits measured
+ * on full white screen, in min and max backlight settings.
+ */
+ unsigned int min_brightness_millinits;
+ unsigned int max_brightness_millinits;
+ unsigned int nits_range;
+
+ bool backlight_caps_valid;
+ bool use_custom_backlight_caps;
+ unsigned int custom_backlight_caps_config_no;
+ bool use_linear_backlight_curve;
+};
+
+struct dmcu_varibright_cached_properties {
+ unsigned int varibright_config_setting;
+ unsigned int varibright_level;
+ unsigned int varibright_hw_level;
+ unsigned int def_varibright_level;
+ bool varibright_user_enable;
+ bool varibright_active;
+};
+
+struct core_power {
+ struct mod_power public;
+ struct dc *dc;
+ struct power_entity *map;
+ struct dmcu_varibright_cached_properties varibright_prop;
+ struct backlight_properties bl_prop[MAX_NUM_EDP];
+ struct backlight_state bl_state[MAX_NUM_EDP];
+ unsigned int edp_num;
+
+ bool psr_smu_optimizations_support;
+ bool multi_disp_optimizations_support;
+
+ int num_entities;
+};
+
+union dmcu_abm_set_bl_params {
+ struct {
+ unsigned int gradual_change : 1; /* [0:0] */
+ unsigned int reserved : 15; /* [15:1] */
+ unsigned int frame_ramp : 16; /* [31:16] */
+ } bits;
+ unsigned int u32All;
+};
+
+/* If the system or panel does not report a brightness percent to nits
+ * mapping, we fall back to the following default values so backlight
+ * control using nits based interfaces still works, though it may not
+ * describe the panel correctly. In that case percentage based backlight
+ * control should ideally be used.
+ * Min = 1 nit
+ * Max = 270 nits
+ */
+
+static const unsigned int pwr_default_min_brightness_millinits = 1000;
+static const unsigned int pwr_default_sdr_brightness_millinits = 270000;
+
+static const unsigned int default_ac_backlight_percent = 100;
+static const unsigned int default_dc_backlight_percent = 70;
+
+#define MOD_POWER_TO_CORE(mod_power)\
+ container_of(mod_power, struct core_power, public)
+
+static unsigned int calc_psr_num_static_frames(unsigned int vsync_rate_hz)
+{
+ /* Calculate number of static frames before generating interrupt to
+ * enter PSR.
+ */
+ unsigned int frame_time_microsec = 1000000 / vsync_rate_hz;
+
+ // Fail-safe default of 2 static frames
+ unsigned int num_frames_static = 2;
+
+ /* Round up
+ * Calculate number of frames such that at least 30 ms of time has
+ * passed.
+ */
+ if (vsync_rate_hz != 0)
+ num_frames_static = (30000 / frame_time_microsec) + 1;
+
+ return num_frames_static;
+}
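The frame-count calculation above (enough static frames to cover at least ~30 ms, with a 2-frame fail-safe) can be checked in isolation. This is a minimal standalone sketch mirroring calc_psr_num_static_frames(), not the driver code itself:

```c
#include <assert.h>

/* Frames of static content covering at least ~30 ms before PSR entry,
 * mirroring calc_psr_num_static_frames() in the patch (integer math). */
static unsigned int psr_static_frames(unsigned int vsync_rate_hz)
{
	unsigned int frame_time_us;
	unsigned int frames = 2;	/* fail-safe default */

	if (vsync_rate_hz != 0) {
		frame_time_us = 1000000 / vsync_rate_hz;
		frames = (30000 / frame_time_us) + 1;
	}
	return frames;
}
```

With the integer division above, 60 Hz yields 2 frames and 120 Hz yields 4.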
+
+/* Given a specific dc_stream* this function finds its equivalent
+ * in core_power->map and returns the corresponding index
+ */
+static unsigned int map_index_from_stream(struct core_power *core_power,
+ const struct dc_stream_state *stream)
+{
+ unsigned int index = 0;
+
+ for (index = 0; index < core_power->num_entities; index++) {
+ if (core_power->map[index].stream == stream)
+ return index;
+ }
+ /* Could not find stream requested, this is not trivial, fix when hit */
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "map index from stream: ERROR: core_power=%p stream=%p",
+ core_power,
+ stream);
+ ASSERT(false);
+ /* We come here only when we can't map stream index.
+ * In known cases, this happens when we attempt to change
+ * brightness before stream creation, in which case we create a
+ * dummy stream with index 0.
+ * With external monitor connected, the index passed from this return
+ * is 1. Passing anything greater than 0 from here would always point
+ * to bad memory.
+ */
+ return 0;
+}
+
+static uint16_t backlight_8_to_16(unsigned int backlight_8bit)
+{
+ return (uint16_t)(backlight_8bit * 0x101);
+}
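Multiplying by 0x101 replicates the 8-bit value into both bytes, so full scale maps exactly (0xFF becomes 0xFFFF). A standalone sketch of the same expansion:

```c
#include <assert.h>
#include <stdint.h>

/* Expand an 8-bit backlight level to 16 bits by byte replication,
 * as backlight_8_to_16() does: v * 0x101 turns 0xAB into 0xABAB. */
static uint16_t bl_8_to_16(unsigned int v8)
{
	return (uint16_t)(v8 * 0x101);
}
```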
+
+
+static unsigned int backlight_millipercent_to_millinit(
+ struct core_power *core_power, unsigned int millipercent, unsigned int inst)
+{
+ unsigned int millinit = 0;
+ unsigned long long numerator = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ numerator = ((unsigned long long)millipercent) *
+ core_power->bl_prop[inst].nits_range;
+ millinit = ((unsigned int)div_u64(numerator, 100000)) +
+ core_power->bl_prop[inst].min_brightness_millinits;
+
+ return millinit;
+}
+
+static unsigned int backlight_millinit_to_millipercent(
+ struct core_power *core_power, unsigned int millinit, unsigned int inst)
+{
+ unsigned int millipercent = 0;
+ unsigned long long numerator = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ if (millinit <= core_power->bl_prop[inst].min_brightness_millinits)
+ return 0;
+
+ if (millinit >= core_power->bl_prop[inst].max_brightness_millinits)
+ return (100 * 1000);
+
+ numerator = (((unsigned long long)millinit) -
+ core_power->bl_prop[inst].min_brightness_millinits) * 100000;
+ millipercent = ((unsigned int)div_u64(numerator,
+ core_power->bl_prop[inst].nits_range));
+
+ return millipercent;
+}
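The two conversions above are clamped linear maps over the panel's nits range. A self-contained sketch of the percent-to-nits direction; the range values used below are illustrative, not the driver's defaults:

```c
#include <assert.h>

/* Linear millipercent -> millinits map mirroring
 * backlight_millipercent_to_millinit(); ranges are illustrative. */
static unsigned int pct_to_millinits(unsigned int millipercent,
				     unsigned int min_nits,
				     unsigned int nits_range)
{
	unsigned long long num = (unsigned long long)millipercent * nits_range;

	return (unsigned int)(num / 100000) + min_nits;
}
```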
+
+static unsigned int backlight_pwm_to_millipercent(
+ struct core_power *core_power, unsigned int pwm, unsigned int inst)
+{
+ unsigned int millipercent = 0;
+ unsigned int max_index = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ if (!core_power->bl_prop[inst].backlight_caps_valid)
+ return 0;
+
+ /* A single backlight level cannot describe a usable
+ * brightness range.
+ */
+ if (core_power->bl_prop[inst].num_backlight_levels < 2)
+ return 0;
+
+ max_index = core_power->bl_prop[inst].num_backlight_levels - 1;
+
+ if (pwm <= core_power->bl_prop[inst].backlight_lut[0])
+ return 0;
+
+ if (pwm > core_power->bl_prop[inst].backlight_lut[max_index])
+ return (100 * 1000);
+
+ /* We need to do a binary search over the array for where the pwm level
+ * is in the lut. Based on the index we can determine percentage.
+ */
+ unsigned int min = 0;
+ unsigned int max = max_index;
+ unsigned int mid = 0;
+
+ while (max >= min) {
+ mid = (min + max) / 2; /* floor of half range */
+
+ if (core_power->bl_prop[inst].backlight_lut[mid] < pwm)
+ min = mid + 1;
+ else if (core_power->bl_prop[inst].backlight_lut[mid] > pwm)
+ max = mid - 1;
+ else
+ break;
+ }
+
+ /* In this case, exact match is not found. Check if mid/min/max
+ * value is actually closer.
+ */
+ if (max < min) {
+ unsigned int min_delta;
+ unsigned int mid_delta;
+ unsigned int max_delta;
+
+ min_delta = (core_power->bl_prop[inst].backlight_lut[min] > pwm) ?
+ core_power->bl_prop[inst].backlight_lut[min] - pwm :
+ pwm - core_power->bl_prop[inst].backlight_lut[min];
+
+ mid_delta = (core_power->bl_prop[inst].backlight_lut[mid] > pwm) ?
+ core_power->bl_prop[inst].backlight_lut[mid] - pwm :
+ pwm - core_power->bl_prop[inst].backlight_lut[mid];
+
+ max_delta = (core_power->bl_prop[inst].backlight_lut[max] > pwm) ?
+ core_power->bl_prop[inst].backlight_lut[max] - pwm :
+ pwm - core_power->bl_prop[inst].backlight_lut[max];
+
+ if ((min_delta < mid_delta) && (min_delta < max_delta))
+ mid = min;
+
+ if ((max_delta < mid_delta) && (max_delta < min_delta))
+ mid = max;
+ }
+
+ /* No interpolation, just take closest index */
+ millipercent = 1000 * 100 * mid / max_index;
+
+ return millipercent;
+}
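The binary search above resolves a PWM value to the closest LUT index. The same result can be sketched with a simple linear scan, which is easier to verify (LUT contents below are illustrative):

```c
#include <assert.h>

/* Closest-match lookup over an ascending LUT, equivalent in result to
 * the binary search in backlight_pwm_to_millipercent(). */
static unsigned int closest_lut_index(const unsigned int *lut,
				      unsigned int n, unsigned int target)
{
	unsigned int i, best = 0, best_delta = (unsigned int)-1;

	for (i = 0; i < n; i++) {
		unsigned int d = lut[i] > target ? lut[i] - target
						 : target - lut[i];
		if (d < best_delta) {
			best_delta = d;
			best = i;
		}
	}
	return best;
}
```

The percentage then follows as 100000 * index / (n - 1), as in the patch.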
+
+static unsigned int backlight_pwm_to_millinit(
+ struct core_power *core_power, unsigned int pwm, unsigned int inst)
+{
+ unsigned int millinit = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ if (pwm <= core_power->bl_prop[inst].min_backlight_pwm)
+ return core_power->bl_prop[inst].min_brightness_millinits;
+
+ if (pwm >= core_power->bl_prop[inst].max_backlight_pwm)
+ return core_power->bl_prop[inst].max_brightness_millinits;
+
+ millinit = ((unsigned int)div_u64(((unsigned long long)pwm -
+ core_power->bl_prop[inst].min_backlight_pwm) *
+ core_power->bl_prop[inst].nits_range,
+ core_power->bl_prop[inst].backlight_range));
+
+ millinit += core_power->bl_prop[inst].min_brightness_millinits;
+
+ if (millinit > core_power->bl_prop[inst].max_brightness_millinits)
+ millinit = core_power->bl_prop[inst].max_brightness_millinits;
+
+ return millinit;
+}
+
+static unsigned int backlight_millipercent_to_pwm(
+ struct core_power *core_power, unsigned int millipercent, unsigned int inst)
+{
+ unsigned int pwm = (unsigned int)-1;
+ unsigned int index = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ // Bypass the brightness mapping LUT
+ if (core_power->bl_prop[inst].use_linear_backlight_curve) {
+ pwm = core_power->bl_prop[inst].min_backlight_pwm +
+ (unsigned int) div_u64((unsigned long long) millipercent *
+ core_power->bl_prop[inst].backlight_range,
+ 100000);
+
+ if (pwm > core_power->bl_prop[inst].max_backlight_pwm)
+ pwm = core_power->bl_prop[inst].max_backlight_pwm;
+
+ return pwm;
+ }
+
+ if (millipercent >= (100 * 1000))
+ return core_power->bl_prop[inst].backlight_lut[core_power->bl_prop[inst].num_backlight_levels - 1];
+
+ /* This will give the floor index. */
+ index = ((core_power->bl_prop[inst].num_backlight_levels - 1) *
+ millipercent) / 100000;
+ /* Null check, otherwise eDP doesn't light up when connected to DP1 */
+ if (core_power->bl_prop[inst].backlight_lut == NULL)
+ return pwm;
+
+ pwm = core_power->bl_prop[inst].backlight_lut[index];
+
+ return pwm;
+}
+
+static unsigned int backlight_millinit_to_pwm(
+ struct core_power *core_power, unsigned int millinit, unsigned int inst)
+{
+ unsigned int pwm = 0;
+
+ if (core_power == NULL)
+ return 0;
+
+ /* For nits based brightness, the signal will be a value
+ * between the minimum and maximum value.
+ */
+ if (millinit >= core_power->bl_prop[inst].max_brightness_millinits)
+ return core_power->bl_prop[inst].max_backlight_pwm;
+ else if (millinit <= core_power->bl_prop[inst].min_brightness_millinits)
+ return core_power->bl_prop[inst].min_backlight_pwm;
+
+ pwm = ((unsigned int)div_u64(((unsigned long long)millinit -
+ core_power->bl_prop[inst].min_brightness_millinits) *
+ core_power->bl_prop[inst].backlight_range,
+ core_power->bl_prop[inst].nits_range));
+
+ pwm += core_power->bl_prop[inst].min_backlight_pwm;
+
+ if (pwm > core_power->bl_prop[inst].max_backlight_pwm)
+ pwm = core_power->bl_prop[inst].max_backlight_pwm;
+
+ return pwm;
+}
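The helper above is a clamped linear map from the nits range onto the PWM range. A self-contained sketch with illustrative ranges:

```c
#include <assert.h>

/* Clamped linear millinits -> PWM map mirroring
 * backlight_millinit_to_pwm(); all ranges here are illustrative. */
static unsigned int nits_to_pwm(unsigned int millinit,
				unsigned int min_nits, unsigned int max_nits,
				unsigned int min_pwm, unsigned int max_pwm)
{
	unsigned long long num;
	unsigned int pwm;

	if (millinit >= max_nits)
		return max_pwm;
	if (millinit <= min_nits)
		return min_pwm;

	num = (unsigned long long)(millinit - min_nits) * (max_pwm - min_pwm);
	pwm = (unsigned int)(num / (max_nits - min_nits)) + min_pwm;

	return pwm < max_pwm ? pwm : max_pwm;
}
```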
+
+static bool validate_ext_backlight_caps(
+ struct dm_acpi_atif_backlight_caps *ext_backlight_caps)
+{
+ unsigned int i;
+ unsigned int num_of_data_points = 0;
+ unsigned int last_signal_level = 0;
+ unsigned int last_luminance = 0;
+
+ num_of_data_points = ext_backlight_caps->num_data_points;
+
+ /* Validation rules:
+ * 1. BIOS should carry customized data points and
+ * the number of data points should not be larger than 99.
+ * 2. The max_input_signal should be larger than min_input_signal.
+ * 3. For each data point:
+ * a. luminance should be in ascending order and
+ * should not be 0 or 100, since the corresponding signal_levels
+ * are assigned from min_input_signal and max_input_signal.
+ * b. signal_level should be in ascending order and
+ * be within the range of min/max_input_signal.
+ */
+ if (num_of_data_points > BL_DATA_POINTS)
+ return false;
+
+ if (ext_backlight_caps->min_input_signal >= ext_backlight_caps->max_input_signal)
+ return false;
+
+ last_signal_level = ext_backlight_caps->min_input_signal;
+ for (i = 0; i < num_of_data_points; i++) {
+ unsigned int luminance = ext_backlight_caps->data_points[i].luminance;
+ unsigned int signal_level = ext_backlight_caps->data_points[i].signal_level;
+
+ if ((luminance <= last_luminance) || (luminance > BL_DATA_POINTS))
+ return false;
+
+ if ((signal_level <= last_signal_level) || (signal_level >= ext_backlight_caps->max_input_signal))
+ return false;
+
+ last_signal_level = signal_level;
+ last_luminance = luminance;
+ }
+
+ return true;
+}
+
+/* hard coded to default backlight curve. */
+static void initialize_backlight_caps(struct core_power *core_power, unsigned int inst)
+{
+ unsigned int i;
+ struct dm_acpi_atif_backlight_caps *ext_backlight_caps = NULL;
+ bool custom_curve_present = false;
+ unsigned int num_levels = 0;
+ struct dc *dc = NULL;
+ enum dm_acpi_display_type acpi_display_type =
+ (inst == 0) ? AcpiDisplayType_LCD1 : AcpiDisplayType_LCD2;
+
+ if (core_power == NULL)
+ return;
+ dc = core_power->dc;
+
+ num_levels = core_power->bl_prop[inst].num_backlight_levels;
+
+ /* Allocate memory for ATIF output
+ * (do not want to use 256 bytes on the stack)
+ */
+ ext_backlight_caps = kzalloc(sizeof(*ext_backlight_caps),
+ GFP_KERNEL);
+
+ if (ext_backlight_caps == NULL)
+ return;
+
+ /* Retrieve ACPI extended brightness caps */
+ if (dm_query_extended_brightness_caps
+ (dc->ctx, acpi_display_type, ext_backlight_caps)) {
+ custom_curve_present = validate_ext_backlight_caps(ext_backlight_caps);
+ }
+
+ if (core_power->bl_prop[inst].use_custom_backlight_caps &&
+ fill_custom_backlight_caps(
+ core_power->bl_prop[inst].custom_backlight_caps_config_no,
+ ext_backlight_caps)) {
+ custom_curve_present = validate_ext_backlight_caps(ext_backlight_caps);
+ }
+
+ if (custom_curve_present) {
+ unsigned int index = 1;
+ unsigned int num_of_data_points = ext_backlight_caps->num_data_points;
+
+ core_power->bl_prop[inst].ac_backlight_percent =
+ ext_backlight_caps->ac_level_percentage;
+ core_power->bl_prop[inst].dc_backlight_percent =
+ ext_backlight_caps->dc_level_percentage;
+ core_power->bl_prop[inst].backlight_lut[0] =
+ backlight_8_to_16(
+ ext_backlight_caps->min_input_signal);
+ core_power->bl_prop[inst].backlight_lut[num_levels - 1] =
+ backlight_8_to_16(
+ ext_backlight_caps->max_input_signal);
+
+ /* Fill the translation table from the data points -
+ * between every two provided data points we linearly
+ * interpolate the missing values
+ */
+ for (i = 0; i < num_of_data_points; i++) {
+ unsigned int luminance =
+ ext_backlight_caps->data_points[i].luminance;
+ unsigned int signal_level =
+ backlight_8_to_16(
+ ext_backlight_caps->data_points[i].signal_level);
+
+ /* Since luminance is a percentage, scale it by num_levels */
+ luminance = (luminance * num_levels) / 101;
+
+ /* Linearly interpolate missing values */
+ if (index < luminance) {
+ unsigned int base_value =
+ core_power->bl_prop[inst].backlight_lut[index-1];
+ unsigned int delta_signal =
+ signal_level - base_value;
+ unsigned int delta_luma =
+ luminance - index + 1;
+ unsigned int step = delta_signal;
+
+ for (; index < luminance; index++) {
+ core_power->bl_prop[inst].backlight_lut[index] =
+ base_value + (step / delta_luma);
+ step += delta_signal;
+ }
+ }
+
+ /* Now [index == luminance],
+ * so we can add data point to the translation table
+ */
+ core_power->bl_prop[inst].backlight_lut[index++] = signal_level;
+ }
+
+ /* Complete the final segment of interpolation -
+ * between last datapoint and maximum value
+ */
+ if (index < num_levels - 1) {
+ unsigned int base_value =
+ core_power->bl_prop[inst].backlight_lut[index-1];
+ unsigned int delta_signal =
+ core_power->bl_prop[inst].backlight_lut[num_levels - 1] -
+ base_value;
+ unsigned int delta_luma = num_levels - index;
+ unsigned int step = delta_signal;
+
+ for (; index < num_levels - 1; index++) {
+ core_power->bl_prop[inst].backlight_lut[index] =
+ base_value + (step / delta_luma);
+ step += delta_signal;
+ }
+ }
+ /* Build backlight translation table based on default curve */
+ } else {
+ /* Defines default backlight curve F(x) = A(x*x) + Bx + C.
+ *
+ * Backlight curve should always satisfy:
+ * F(0) = min, F(100) = max,
+ * So the polynomial coefficients are:
+ * A is 0.0255 - B/100 - min/10000 - (255-max)/10000 =
+ * (max - min)/10000 - B/100
+ * B is adjustable factor to modify the curve.
+ * Bigger B results in less concave curve.
+ * B range is [0..(max-min)/100]
+ * C is backlight minimum
+ */
+ unsigned int backlight_curve_coeff_a_factor =
+ num_levels * num_levels;
+ unsigned int backlight_curve_coeff_b = num_levels;
+ unsigned int delta =
+ core_power->bl_prop[inst].backlight_lut[num_levels - 1] -
+ core_power->bl_prop[inst].backlight_lut[0];
+ unsigned int coeffC = core_power->bl_prop[inst].backlight_lut[0];
+ unsigned int coeffB =
+ (backlight_curve_coeff_b < delta ?
+ backlight_curve_coeff_b : delta);
+ unsigned long long coeffA = delta - coeffB; /* coeffB is B*100 */
+
+ for (i = 1; i < num_levels - 1; i++) {
+ uint64_t lut_val = div_u64(coeffA * i * i, backlight_curve_coeff_a_factor) +
+ div_u64((uint64_t)coeffB * i, backlight_curve_coeff_b) + coeffC;
+
+ ASSERT(lut_val <= 0xFFFFFFFF);
+ core_power->bl_prop[inst].backlight_lut[i] = (unsigned int)lut_val;
+ }
+ }
+
+ kfree(ext_backlight_caps);
+
+ /* Successfully initialized */
+ core_power->bl_prop[inst].backlight_caps_valid = true;
+}
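The default-curve branch fills the LUT from F(x) = A*x^2 + B*x + C with the endpoints pinned to the min/max PWM. A standalone sketch of that fallback; the endpoint PWM values in the test are illustrative:

```c
#include <assert.h>

/* Fill a LUT from the default quadratic curve F(i) = A*i^2 + B*i + C,
 * mirroring the fallback branch of initialize_backlight_caps().
 * Endpoints are pinned to min_pwm/max_pwm as in the patch. */
static void fill_default_curve(unsigned int *lut, unsigned int n,
			       unsigned int min_pwm, unsigned int max_pwm)
{
	unsigned int delta = max_pwm - min_pwm;
	unsigned int coeff_b = (n < delta) ? n : delta;
	unsigned long long coeff_a = delta - coeff_b;
	unsigned int i;

	lut[0] = min_pwm;
	lut[n - 1] = max_pwm;
	for (i = 1; i < n - 1; i++)
		lut[i] = (unsigned int)(coeff_a * i * i /
					((unsigned long long)n * n)) +
			 (unsigned int)((unsigned long long)coeff_b * i / n) +
			 min_pwm;
}
```

Because every term is nondecreasing in i, the resulting curve is monotonically nondecreasing between the pinned endpoints.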
+
+static void varibright_set_level(struct core_power *core_power)
+{
+ if (!core_power->varibright_prop.varibright_active ||
+ !core_power->varibright_prop.varibright_user_enable)
+ core_power->varibright_prop.varibright_hw_level = 0;
+ else
+ core_power->varibright_prop.varibright_hw_level =
+ core_power->varibright_prop.varibright_level;
+}
+
+bool mod_power_hw_init(struct mod_power *mod_power)
+{
+ struct core_power *core_power = NULL;
+ struct dc *dc = NULL;
+ struct dmcu *dmcu = NULL;
+ struct dmcu_iram_parameters params;
+ int i;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ dc = core_power->dc;
+
+ for (i = 0; i < core_power->edp_num; i++) {
+ params.set = core_power->varibright_prop.varibright_config_setting;
+ params.backlight_ramping_override = core_power->bl_prop[i].backlight_ramping_override;
+ params.backlight_ramping_reduction = core_power->bl_prop[i].backlight_ramping_reduction;
+ params.backlight_ramping_start = core_power->bl_prop[i].backlight_ramping_start;
+ params.backlight_lut_array = core_power->bl_prop[i].backlight_lut;
+ params.backlight_lut_array_size = core_power->bl_prop[i].num_backlight_levels;
+ params.min_abm_backlight = core_power->bl_prop[i].min_abm_backlight;
+
+ dmcu = dc->res_pool->dmcu;
+
+ // In the case where abm is implemented on dmcub,
+ // dmcu object will be null.
+ // ABM 2.4 and up are implemented on dmcub.
+ if (dmcu) {
+ //DMCU does not support multiple eDP
+ return dmcu_load_iram(dmcu, params);
+ } else if (dc->ctx->dmub_srv) {
+ if (!dmub_init_abm_config(dc->res_pool, params, i))
+ return false;
+ } else
+ return false;
+ }
+ return true;
+}
+
+struct mod_power *mod_power_create(struct dc *dc,
+ struct mod_power_init_params *init_params,
+ unsigned int edp_num)
+{
+ struct core_power *core_power = NULL;
+ int i = 0;
+ int abm_max_config = 0;
+ unsigned int inst = 0;
+ bool is_brightness_range_valid = false;
+
+ if (dc == NULL)
+ goto fail_dc_null;
+
+ core_power = kzalloc(sizeof(struct core_power), GFP_KERNEL);
+
+ if (core_power == NULL)
+ goto fail_alloc_context;
+
+ core_power->edp_num = edp_num;
+ core_power->map = kcalloc(MOD_POWER_MAX_CONCURRENT_STREAMS,
+ sizeof(struct power_entity), GFP_KERNEL);
+
+ if (core_power->map == NULL)
+ goto fail_alloc_map;
+
+ for (i = 0; i < MOD_POWER_MAX_CONCURRENT_STREAMS; i++)
+ core_power->map[i].stream = NULL;
+
+ for (i = 0; i < MOD_POWER_MAX_CONCURRENT_STREAMS; i++) {
+ core_power->map[i].psr_context =
+ kzalloc(sizeof(struct mod_power_psr_context),
+ GFP_KERNEL);
+ if (core_power->map[i].psr_context == NULL)
+ goto fail_construct;
+ }
+
+ core_power->psr_smu_optimizations_support = init_params->allow_psr_smu_optimizations;
+ core_power->multi_disp_optimizations_support = init_params->allow_psr_multi_disp_optimizations;
+
+ for (inst = 0; inst < edp_num; inst++) {
+ core_power->bl_prop[inst].min_abm_backlight =
+ init_params[inst].min_abm_backlight;
+ core_power->bl_prop[inst].disable_fractional_pwm =
+ init_params[inst].disable_fractional_pwm;
+ core_power->bl_prop[inst].use_linear_backlight_curve =
+ init_params[inst].use_linear_backlight_curve;
+ core_power->bl_prop[inst].use_nits_based_brightness =
+ init_params[inst].use_nits_based_brightness;
+ core_power->bl_prop[inst].backlight_ramping_override =
+ init_params[inst].backlight_ramping_override;
+ core_power->bl_prop[inst].backlight_ramping_reduction =
+ init_params[inst].backlight_ramping_reduction;
+ core_power->bl_prop[inst].backlight_ramping_start =
+ init_params[inst].backlight_ramping_start;
+ core_power->bl_prop[inst].use_custom_backlight_caps =
+ init_params[inst].use_custom_backlight_caps;
+ core_power->bl_prop[inst].custom_backlight_caps_config_no =
+ init_params[inst].custom_backlight_caps_config_no;
+
+ // Do not allow less than 101 backlight levels
+ if (init_params[inst].num_backlight_levels < 101)
+ core_power->bl_prop[inst].num_backlight_levels = 101;
+ else
+ core_power->bl_prop[inst].num_backlight_levels =
+ init_params[inst].num_backlight_levels;
+
+ core_power->bl_prop[inst].backlight_lut =
+ kcalloc(core_power->bl_prop[inst].num_backlight_levels,
+ sizeof(unsigned int), GFP_KERNEL);
+ if (core_power->bl_prop[inst].backlight_lut == NULL)
+ goto fail_alloc_backlight_array;
+ }
+
+ core_power->varibright_prop.varibright_active = false;
+
+ core_power->varibright_prop.varibright_user_enable =
+ init_params->def_varibright_enable;
+
+ // Table of ABM levels here is 1-4, but level 0 also exists as 'off'
+ if (init_params->varibright_level <= abm_defines_max_level) {
+ core_power->varibright_prop.varibright_level =
+ init_params->varibright_level;
+
+ } else {
+ core_power->varibright_prop.varibright_level = 3;
+ }
+ if (init_params->def_varibright_level <= abm_defines_max_level) {
+ core_power->varibright_prop.def_varibright_level =
+ init_params->def_varibright_level;
+ } else {
+ core_power->varibright_prop.def_varibright_level = 3;
+ }
+
+ // ABM used to contain 4 different configs; there have been only 3 since ABM 2.3.
+ if ((dc->res_pool->dmcu != NULL) && (dc->res_pool->dmcu->dmcu_version.abm_version < 0x23))
+ abm_max_config = 4;
+ else
+ abm_max_config = 3;
+
+ if (init_params->abm_config_setting < abm_max_config)
+ core_power->varibright_prop.varibright_config_setting =
+ init_params->abm_config_setting;
+ else
+ core_power->varibright_prop.varibright_config_setting = 0;
+
+ for (inst = 0; inst < edp_num; inst++) {
+ core_power->bl_prop[inst].backlight_lut[0] = init_params[inst].min_backlight_pwm;
+ core_power->bl_prop[inst].backlight_lut[
+ core_power->bl_prop[inst].num_backlight_levels-1] =
+ init_params[inst].max_backlight_pwm;
+ core_power->bl_prop[inst].min_backlight_pwm = init_params[inst].min_backlight_pwm;
+ core_power->bl_prop[inst].max_backlight_pwm = init_params[inst].max_backlight_pwm;
+ core_power->bl_prop[inst].ac_backlight_percent =
+ default_ac_backlight_percent;
+ core_power->bl_prop[inst].dc_backlight_percent =
+ default_dc_backlight_percent;
+ core_power->bl_prop[inst].backlight_caps_valid = false;
+
+ if (core_power->bl_prop[inst].use_nits_based_brightness) {
+ core_power->bl_prop[inst].min_brightness_millinits =
+ init_params[inst].panel_min_millinits;
+ core_power->bl_prop[inst].max_brightness_millinits =
+ init_params[inst].panel_max_millinits;
+ } else {
+
+ core_power->bl_prop[inst].min_brightness_millinits =
+ pwr_default_min_brightness_millinits;
+ core_power->bl_prop[inst].max_brightness_millinits =
+ pwr_default_sdr_brightness_millinits;
+ }
+
+ core_power->bl_prop[inst].backlight_range =
+ core_power->bl_prop[inst].max_backlight_pwm-
+ core_power->bl_prop[inst].min_backlight_pwm;
+
+ core_power->bl_prop[inst].nits_range =
+ core_power->bl_prop[inst].max_brightness_millinits -
+ core_power->bl_prop[inst].min_brightness_millinits;
+
+ core_power->bl_state[inst].smooth_brightness_enabled = true;
+ }
+
+ /* Check if at least 1 instance in core_power is populated before failing */
+ for (inst = 0; inst < edp_num; inst++) {
+ if (core_power->bl_prop[inst].nits_range != 0 && core_power->bl_prop[inst].backlight_range != 0) {
+ is_brightness_range_valid = true;
+ break;
+ }
+
+ }
+ if (!is_brightness_range_valid)
+ goto fail_bad_brightness_range;
+
+ core_power->num_entities = 0;
+
+ core_power->dc = dc;
+ for (inst = 0; inst < edp_num; inst++) {
+ initialize_backlight_caps(core_power, inst);
+ core_power->bl_state[inst].backlight_millipercent =
+ core_power->bl_prop[inst].dc_backlight_percent * 1000;
+ core_power->bl_state[inst].backlight_pwm = backlight_millipercent_to_pwm(core_power,
+ core_power->bl_state[inst].backlight_millipercent, inst);
+ core_power->bl_state[inst].backlight_millinit = backlight_millipercent_to_millinit(core_power,
+ core_power->bl_state[inst].backlight_millipercent, inst);
+ }
+
+ return &core_power->public;
+
+fail_bad_brightness_range:
+fail_alloc_backlight_array:
+ for (inst = 0; inst < edp_num; inst++)
+ kfree(core_power->bl_prop[inst].backlight_lut);
+fail_construct:
+ for (i = 0; i < MOD_POWER_MAX_CONCURRENT_STREAMS; i++)
+ kfree(core_power->map[i].psr_context);
+ kfree(core_power->map);
+
+fail_alloc_map:
+ kfree(core_power);
+
+fail_alloc_context:
+fail_dc_null:
+ return NULL;
+}
+
+void mod_power_destroy(struct mod_power *mod_power)
+{
+ if (mod_power != NULL) {
+ int i;
+ struct core_power *core_power =
+ MOD_POWER_TO_CORE(mod_power);
+
+ for (i = 0; i < MOD_POWER_MAX_CONCURRENT_STREAMS; i++)
+ kfree(core_power->map[i].psr_context);
+
+ for (i = 0; i < core_power->num_entities; i++)
+ if (core_power->map[i].stream)
+ dc_stream_release(core_power->map[i].stream);
+
+ kfree(core_power->map);
+
+ for (i = 0; i < MAX_NUM_EDP; i++)
+ kfree(core_power->bl_prop[i].backlight_lut);
+
+ kfree(core_power);
+ }
+}
+
+bool mod_power_add_stream(struct mod_power *mod_power,
+ struct dc_stream_state *stream, struct psr_caps *caps)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities < MOD_POWER_MAX_CONCURRENT_STREAMS) {
+ dc_stream_retain(stream);
+
+ core_power->map[core_power->num_entities].stream = stream;
+ core_power->map[core_power->num_entities].caps = caps;
+
+ // initialize cached PSR params to something "safe" (something that is
+ // consistent with disabled PSR state)
+ core_power->map[core_power->num_entities].psr_enabled = 0;
+ core_power->map[core_power->num_entities].psr_events = psr_event_vsync;
+ core_power->map[core_power->num_entities].psr_power_opt = 0;
+ core_power->num_entities++;
+ return true;
+ }
+
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power: add_stream: ERROR: stream=%p num_entities=%d >= MOD_POWER_MAX_CONCURRENT_STREAMS",
+ stream,
+ core_power->num_entities);
+
+ return false;
+}
+
+bool mod_power_remove_stream(struct mod_power *mod_power,
+ const struct dc_stream_state *stream)
+{
+ int i = 0;
+ struct core_power *core_power = NULL;
+ unsigned int index = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0) {
+		/* trying to remove a stream a second time, or one that was never added */
+ BREAK_TO_DEBUGGER();
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power: remove_stream: ERROR: num_entities=0 stream=%p",
+ stream);
+ return false;
+ }
+
+ index = map_index_from_stream(core_power, stream);
+
+ if (index >= core_power->num_entities) {
+		/* trying to remove a stream a second time, or one that was never added */
+ BREAK_TO_DEBUGGER();
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power: remove_stream: ERROR: index=%u >= num_entities=%d stream=%p",
+ index,
+ core_power->num_entities,
+ stream);
+ return false;
+ }
+
+ dc_stream_release(core_power->map[index].stream);
+ core_power->map[index].stream = NULL;
+ /* To remove this entity, shift everything after down */
+ for (i = index; i < core_power->num_entities - 1; i++) {
+ core_power->map[i].stream = core_power->map[i + 1].stream;
+ core_power->map[i].caps = core_power->map[i + 1].caps;
+
+ // copy over cached parameters in case they map to PSR capable display
+ core_power->map[i].psr_enabled = core_power->map[i + 1].psr_enabled;
+ core_power->map[i].psr_events = core_power->map[i + 1].psr_events;
+ core_power->map[i].psr_power_opt = core_power->map[i + 1].psr_power_opt;
+
+ memcpy(core_power->map[i].psr_context, core_power->map[i + 1].psr_context, sizeof(struct mod_power_psr_context));
+ memset(core_power->map[i + 1].psr_context, 0, sizeof(struct mod_power_psr_context));
+ }
+ core_power->num_entities--;
+
+ return true;
+}
+
+/*
+ * Replace_stream should be used when there is a mode set for an existing
+ * display target with a valid stream. In this case we might need to retain
+ * the cached PSR state (events, power opt, en/dis) if we are dealing with a
+ * PSR-capable display. If mod_power_remove and mod_power_add are used
+ * instead, the stream may be assigned to a different slot and end up with
+ * the wrong cached PSR state. It is hard to tell which PSR events should
+ * persist through a mode set, or what psr_events should be initialized to,
+ * so it is better to just retain them all.
+ */
+bool mod_power_replace_stream(struct mod_power *mod_power,
+ const struct dc_stream_state *current_stream,
+ struct dc_stream_state *new_stream,
+ struct psr_caps *new_caps)
+{
+ struct core_power *core_power = NULL;
+ unsigned int index = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0) {
+ /* no streams exist in the table yet */
+ BREAK_TO_DEBUGGER();
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power: replace_stream: ERROR: num_entities=0 stream=%p",
+ current_stream);
+ return false;
+ }
+
+ index = map_index_from_stream(core_power, current_stream);
+
+ if (index >= core_power->num_entities) {
+ /* trying to replace a non-existent stream */
+ BREAK_TO_DEBUGGER();
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power: replace_stream: ERROR: index=%u >= num_entities=%d stream=%p",
+ index,
+ core_power->num_entities,
+ current_stream);
+ return false;
+ }
+
+ dc_stream_release(core_power->map[index].stream);
+ dc_stream_retain(new_stream);
+ core_power->map[index].stream = new_stream;
+ core_power->map[index].caps = new_caps;
+ memset(core_power->map[index].psr_context, 0, sizeof(struct mod_power_psr_context));
+
+ return true;
+}
+
+static bool set_backlight_millinits_aux(struct core_power *core_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinits,
+ unsigned int transition_time_millisec,
+ unsigned int inst)
+{
+ struct dc_link *link = NULL;
+
+ if (core_power == NULL)
+ return false;
+
+ if (stream == NULL)
+ return true;
+
+ link = dc_stream_get_link(stream);
+
+ return dc_link_set_backlight_level_nits(link, core_power->bl_state[inst].isHDR,
+ backlight_millinits, transition_time_millisec);
+}
+
+static bool set_backlight(struct core_power *core_power,
+ struct dc_stream_state *stream,
+ struct set_backlight_level_params *backlight_level_params,
+ unsigned int inst)
+{
+ bool retv = false;
+ unsigned int frame_ramp = 0;
+ unsigned int vsync_rate_hz;
+ union dmcu_abm_set_bl_params params;
+ const struct dc_link *link = NULL;
+ unsigned int backlight_pwm_u16_16 = backlight_level_params->backlight_pwm_u16_16;
+ unsigned int transition_time_millisec = backlight_level_params->transition_time_in_ms;
+
+ if (core_power == NULL)
+ return false;
+
+ core_power->bl_state[inst].backlight_pwm = backlight_pwm_u16_16;
+
+ if (stream == NULL)
+ return true;
+
+ if (stream->link->connector_signal != SIGNAL_TYPE_EDP)
+ return false;
+
+ if (transition_time_millisec != 0) {
+ unsigned int v_total =
+ (stream->adjust.v_total_max == 0) ? stream->timing.v_total : stream->adjust.v_total_max;
+
+ vsync_rate_hz = (unsigned int)div_u64(div_u64((stream->
+ timing.pix_clk_100hz * 100),
+ v_total),
+ stream->timing.h_total);
+
+ if (core_power->bl_state[inst].smooth_brightness_enabled)
+ frame_ramp = ((vsync_rate_hz *
+ transition_time_millisec) + 500) / 1000;
+ }
+
+ core_power->bl_state[inst].frame_ramp = frame_ramp;
+ params.u32All = 0;
+ params.bits.gradual_change = (frame_ramp > 0);
+ params.bits.frame_ramp = frame_ramp;
+ link = dc_stream_get_link(stream);
+
+ mod_power_set_psr_event(&core_power->public, stream, true, psr_event_hw_programming, true);
+ mod_power_set_replay_event(&core_power->public, stream, true, replay_event_hw_programming, true);
+
+ backlight_level_params->frame_ramp = params.u32All;
+ retv = dc_link_set_backlight_level(link, backlight_level_params);
+
+ mod_power_set_psr_event(&core_power->public, stream, false, psr_event_hw_programming, false);
+ mod_power_set_replay_event(&core_power->public, stream, false, replay_event_hw_programming, false);
+
+ return retv;
+}
+
+static void fill_backlight_level_params(struct core_power *core_power,
+ struct set_backlight_level_params *backlight_level_params,
+ int panel_inst, uint8_t aux_inst, unsigned int backlight_pwm,
+ enum backlight_control_type backlight_control_type,
+ unsigned int backlight_millinit, unsigned int transition_time_millisec,
+ bool is_hdr)
+{
+ struct backlight_properties *bl_prop = &core_power->bl_prop[panel_inst];
+
+ backlight_level_params->aux_inst = aux_inst;
+ backlight_level_params->backlight_pwm_u16_16 = backlight_pwm;
+ backlight_level_params->control_type = backlight_control_type;
+ backlight_level_params->backlight_millinits = backlight_millinit;
+ backlight_level_params->transition_time_in_ms = transition_time_millisec;
+ backlight_level_params->min_luminance = bl_prop->min_brightness_millinits;
+ backlight_level_params->max_luminance = bl_prop->max_brightness_millinits;
+ backlight_level_params->min_backlight_pwm = bl_prop->min_backlight_pwm;
+ backlight_level_params->max_backlight_pwm = bl_prop->max_backlight_pwm;
+
+ if (backlight_control_type == BACKLIGHT_CONTROL_AMD_AUX && !is_hdr)
+ backlight_level_params->control_type = BACKLIGHT_CONTROL_PWM;
+}
+
+bool mod_power_set_backlight_nits(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinit,
+ unsigned int transition_time_millisec,
+ bool skip_aux,
+ bool is_hdr)
+{
+ struct core_power *core_power = NULL;
+ unsigned int backlight_pwm;
+ unsigned int panel_inst = 0;
+ struct set_backlight_level_params backlight_level_params = { 0 };
+ const struct dc_link *link = NULL;
+ uint8_t aux_inst = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ link = dc_stream_get_link(stream);
+
+ ASSERT(link->ddc->ddc_pin->hw_info.ddc_channel <= 0xFF);
+ aux_inst = (uint8_t)link->ddc->ddc_pin->hw_info.ddc_channel;
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &panel_inst))
+ return false;
+
+ if (!skip_aux) {
+ if (!set_backlight_millinits_aux(core_power, stream,
+ backlight_millinit, transition_time_millisec, panel_inst))
+ return false;
+ }
+	/* always send both AUX (above) and PWM (below) */
+ core_power->bl_state[panel_inst].backlight_millinit = backlight_millinit;
+
+ core_power->bl_state[panel_inst].backlight_millipercent =
+ backlight_millinit_to_millipercent(
+ core_power, backlight_millinit, panel_inst);
+
+ backlight_pwm = backlight_millinit_to_pwm(
+ core_power, backlight_millinit, panel_inst);
+
+ fill_backlight_level_params(core_power, &backlight_level_params, panel_inst, aux_inst, backlight_pwm,
+ link->backlight_control_type, backlight_millinit, transition_time_millisec, is_hdr);
+
+ return set_backlight(core_power, stream,
+ &backlight_level_params, panel_inst);
+}
+
+bool mod_power_backlight_percent_to_nits(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent,
+ unsigned int *backlight_millinit)
+{
+ struct core_power *core_power = NULL;
+ unsigned int inst = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &inst))
+ return false;
+
+ *backlight_millinit = backlight_millipercent_to_millinit(
+ core_power, backlight_millipercent, inst);
+ return true;
+}
+
+bool mod_power_backlight_nits_to_percent(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinit,
+ unsigned int *backlight_millipercent)
+{
+ struct core_power *core_power = NULL;
+ unsigned int inst = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &inst))
+ return false;
+
+ *backlight_millipercent = backlight_millinit_to_millipercent(
+ core_power, backlight_millinit, inst);
+ return true;
+}
+
+bool mod_power_set_backlight_percent(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent,
+ unsigned int transition_time_millisec,
+ bool is_hdr)
+{
+ struct core_power *core_power = NULL;
+ struct set_backlight_level_params backlight_level_params = { 0 };
+ const struct dc_link *link = NULL;
+ unsigned int backlight_pwm;
+ unsigned int panel_inst = 0;
+ uint8_t aux_inst = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ link = dc_stream_get_link(stream);
+ ASSERT(link->ddc->ddc_pin->hw_info.ddc_channel <= 0xFF);
+ aux_inst = (uint8_t)link->ddc->ddc_pin->hw_info.ddc_channel;
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &panel_inst))
+ return false;
+ core_power->bl_state[panel_inst].backlight_millipercent = backlight_millipercent;
+
+ core_power->bl_state[panel_inst].backlight_millinit =
+ backlight_millipercent_to_millinit(
+ core_power, backlight_millipercent, panel_inst);
+
+ backlight_pwm = backlight_millipercent_to_pwm(
+ core_power, backlight_millipercent, panel_inst);
+
+ fill_backlight_level_params(core_power, &backlight_level_params, panel_inst,
+ aux_inst, backlight_pwm, link->backlight_control_type,
+ core_power->bl_state[panel_inst].backlight_millinit, transition_time_millisec, is_hdr);
+
+ return set_backlight(core_power, stream,
+ &backlight_level_params, panel_inst);
+}
+
+void mod_power_update_backlight(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millipercent)
+{
+ struct core_power *core_power = NULL;
+ unsigned int inst = 0;
+
+ if (mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &inst))
+ return;
+ core_power->bl_state[inst].backlight_millipercent = backlight_millipercent;
+
+ core_power->bl_state[inst].backlight_millinit =
+ backlight_millipercent_to_millinit(
+ core_power, backlight_millipercent, inst);
+
+ core_power->bl_state[inst].backlight_pwm = backlight_millipercent_to_pwm(
+ core_power, backlight_millipercent, inst);
+}
+
+void mod_power_update_backlight_nits(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int backlight_millinit)
+{
+ struct core_power *core_power = NULL;
+ unsigned int inst = 0;
+
+ if (mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (!dc_get_edp_link_panel_inst(core_power->dc, stream->link, &inst))
+ return;
+
+ core_power->bl_state[inst].backlight_millinit = backlight_millinit;
+
+ core_power->bl_state[inst].backlight_millipercent = backlight_millinit_to_millipercent(
+ core_power, backlight_millinit, inst);
+ core_power->bl_state[inst].backlight_pwm = backlight_millinit_to_pwm(
+ core_power, backlight_millinit, inst);
+}
+
+bool mod_power_get_backlight_pwm(struct mod_power *mod_power,
+ unsigned int *backlight_pwm,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *backlight_pwm = core_power->bl_state[inst].backlight_pwm;
+
+ return true;
+}
+
+bool mod_power_get_backlight_nits(struct mod_power *mod_power,
+ unsigned int *backlight_millinit,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *backlight_millinit = core_power->bl_state[inst].backlight_millinit;
+
+ return true;
+}
+
+bool mod_power_get_backlight_percent(struct mod_power *mod_power,
+ unsigned int *backlight_millipercent,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *backlight_millipercent = core_power->bl_state[inst].backlight_millipercent;
+
+ return true;
+}
+
+bool mod_power_get_hw_target_backlight_pwm_nits(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millinit,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+ unsigned int backlight_u16_16 = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (mod_power_get_hw_target_backlight_pwm(mod_power, link,
+ &backlight_u16_16)) {
+ *backlight_millinit =
+ backlight_pwm_to_millinit(core_power,
+ backlight_u16_16, inst);
+ return true;
+ }
+ return false;
+}
+
+bool mod_power_get_hw_target_backlight_pwm_percent(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millipercent,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+ unsigned int backlight_u16_16 = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (mod_power_get_hw_target_backlight_pwm(mod_power, link,
+ &backlight_u16_16)) {
+ *backlight_millipercent =
+ backlight_pwm_to_millipercent(core_power,
+ backlight_u16_16, inst);
+ return true;
+ }
+ return false;
+}
+
+bool mod_power_get_hw_target_backlight_pwm(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_u16_16)
+{
+ if (mod_power == NULL)
+ return false;
+
+ *backlight_u16_16 = dc_link_get_target_backlight_pwm(link);
+
+ return true;
+}
+
+bool mod_power_get_hw_backlight_pwm_nits(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millinit,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+ unsigned int backlight_u16_16 = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (mod_power_get_hw_backlight_pwm(mod_power, link, &backlight_u16_16)) {
+ *backlight_millinit =
+ backlight_pwm_to_millinit(core_power,
+ backlight_u16_16, inst);
+ return true;
+ }
+ return false;
+}
+
+bool mod_power_get_hw_backlight_aux_nits(struct mod_power *mod_power,
+ struct dc_stream_state **streams, int num_streams,
+ unsigned int *backlight_millinit_avg,
+ unsigned int *backlight_millinit_peak)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (num_streams < 1)
+ return true;
+
+ for (stream_index = 0; stream_index < num_streams; stream_index++)
+ if (streams[stream_index]->link->connector_signal == SIGNAL_TYPE_EDP ||
+ streams[stream_index]->link->connector_signal == SIGNAL_TYPE_DISPLAY_PORT)
+ break;
+
+ if (stream_index == num_streams)
+ return false;
+
+ link = dc_stream_get_link(streams[stream_index]);
+ if (link->dpcd_sink_ext_caps.bits.hdr_aux_backlight_control == 0)
+ return false;
+
+ return dc_link_get_backlight_level_nits(link, backlight_millinit_avg,
+ backlight_millinit_peak);
+}
+
+bool mod_power_get_hw_backlight_pwm_percent(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_millipercent,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+ unsigned int backlight_u16_16 = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (mod_power_get_hw_backlight_pwm(mod_power, link, &backlight_u16_16)) {
+ *backlight_millipercent =
+ backlight_pwm_to_millipercent(core_power,
+ backlight_u16_16, inst);
+ return true;
+ }
+ return false;
+}
+
+bool mod_power_get_hw_backlight_pwm(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int *backlight_u16_16)
+{
+ if (mod_power == NULL)
+ return false;
+
+ *backlight_u16_16 = dc_link_get_backlight_level(link);
+
+ return true;
+}
+
+bool mod_power_get_panel_backlight_boundaries(
+ struct mod_power *mod_power,
+ unsigned int *out_min_backlight,
+ unsigned int *out_max_backlight,
+ unsigned int *out_ac_backlight_percent,
+ unsigned int *out_dc_backlight_percent,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ /* If cache was successfully updated,
+ * copy the values to output structure and return success
+ */
+ if (core_power->bl_prop[inst].backlight_caps_valid) {
+ *out_min_backlight = core_power->bl_prop[inst].backlight_lut[0];
+ *out_max_backlight =
+ core_power->bl_prop[inst].backlight_lut[
+ core_power->bl_prop[inst].num_backlight_levels - 1];
+ *out_ac_backlight_percent =
+ core_power->bl_prop[inst].ac_backlight_percent;
+ *out_dc_backlight_percent =
+ core_power->bl_prop[inst].dc_backlight_percent;
+
+ return true;
+ }
+
+ return false;
+}
+
+bool mod_power_set_smooth_brightness(struct mod_power *mod_power,
+ bool enable_brightness,
+ unsigned int inst)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ core_power->bl_state[inst].smooth_brightness_enabled = enable_brightness;
+
+ return true;
+}
+
+bool mod_power_notify_mode_change(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ bool is_hdr)
+{
+ unsigned int stream_index = 0;
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ struct psr_config psr_config = {0};
+ struct psr_context psr_context = {0};
+ struct dc *dc = NULL;
+ unsigned int panel_inst = 0;
+ int active_psr_events = 0;
+ int active_replay_events = 0;
+
+ if ((mod_power == NULL) || (stream == NULL))
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (stream_index >= core_power->num_entities)
+ return false;
+
+ dc = core_power->dc;
+ link = dc_stream_get_link(stream);
+ active_psr_events = core_power->map[stream_index].psr_events;
+ active_replay_events = core_power->map[stream_index].replay_events;
+ if (link != NULL && dc_get_edp_link_panel_inst(dc, link, &panel_inst)) {
+		struct set_backlight_level_params backlight_level_params = { 0 };
+		uint8_t aux_inst;
+
+		ASSERT(link->ddc->ddc_pin->hw_info.ddc_channel <= 0xFF);
+		aux_inst = (uint8_t)link->ddc->ddc_pin->hw_info.ddc_channel;
+
+ if (link->dpcd_sink_ext_caps.bits.hdr_aux_backlight_control == 1 ||
+ link->dpcd_sink_ext_caps.bits.sdr_aux_backlight_control == 1)
+ dc_link_set_backlight_level_nits(link, core_power->bl_state[panel_inst].isHDR,
+ core_power->bl_state[panel_inst].backlight_millinit, 0);
+
+ backlight_level_params.frame_ramp = 0;
+
+ fill_backlight_level_params(core_power, &backlight_level_params, panel_inst, aux_inst,
+ core_power->bl_state[panel_inst].backlight_pwm, link->backlight_control_type,
+ core_power->bl_state[panel_inst].backlight_millinit, 0, is_hdr);
+
+ dc_link_set_backlight_level(link, &backlight_level_params);
+
+ mod_power_calc_psr_configs(&psr_config, link, stream);
+
+ psr_config.psr_exit_link_training_required = core_power->map[stream_index].caps->psr_exit_link_training_required;
+
+ if (dc->ctx->asic_id.chip_family >= AMDGPU_FAMILY_GC_11_0_1)
+ psr_config.allow_smu_optimizations =
+ core_power->psr_smu_optimizations_support && dc_is_embedded_signal(stream->signal);
+ else
+ psr_config.allow_smu_optimizations =
+ core_power->psr_smu_optimizations_support && mod_power_only_edp(dc->current_state, stream);
+
+ psr_config.allow_multi_disp_optimizations = core_power->multi_disp_optimizations_support;
+
+ psr_config.rate_control_caps = core_power->map[stream_index].caps->rate_control_caps;
+
+		if (active_psr_events & psr_event_os_request_force_ffu)
+			psr_config.os_request_force_ffu = true;
+
+		/*
+		 * DSC support:
+		 * The DSC slice height value must be a multiple of
+		 * su_y_granularity. According to the panel vendor, the exact
+		 * conditions to fulfill may vary; for now the slice height
+		 * must simply be a multiple of su_y_granularity.
+		 *
+		 * The DSC slice height is determined by the DSC driver, but it
+		 * is not propagated out here, so we calculate it below as
+		 * 'slice_height'.
+		 */
+ psr_su_set_dsc_slice_height(dc, link,
+ (struct dc_stream_state *) stream,
+ &psr_config);
+
+ dc_link_setup_psr(link, stream, &psr_config, &psr_context);
+
+ link->replay_settings.replay_smu_opt_enable =
+ (link->replay_settings.config.replay_smu_opt_supported &&
+ mod_power_only_edp(dc->current_state, stream));
+
+		if (active_replay_events & replay_event_os_request_force_ffu)
+			link->replay_settings.config.os_request_force_ffu = true;
+
+ if (dc_is_embedded_signal(stream->signal))
+ dc->link_srv->dp_setup_replay(link, stream);
+ }
+
+ return true;
+}
+
+bool mod_power_varibright_feature_enable(struct mod_power *mod_power, bool enable,
+ struct dc_stream_update *stream_update)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ core_power->varibright_prop.varibright_user_enable = enable;
+
+ /* find abm hw level to program, and save in stream update */
+ varibright_set_level(core_power);
+ *stream_update->abm_level = core_power->varibright_prop.varibright_hw_level;
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">ABM feature enable: enable=%u su->varibright_level=%u varibright_hw_level=%u",
+ (unsigned int) enable,
+ *stream_update->abm_level,
+ core_power->varibright_prop.varibright_hw_level);
+ return true;
+}
+
+bool mod_power_varibright_activate(struct mod_power *mod_power,
+ bool activate,
+ struct dc_stream_update *stream_update)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ core_power->varibright_prop.varibright_active = activate;
+
+ /* find abm hw level to program, and save in stream update */
+ varibright_set_level(core_power);
+ *stream_update->abm_level = core_power->varibright_prop.varibright_hw_level;
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">ABM activate: activate=%u su->varibright_level=%u",
+ (unsigned int) activate,
+ *stream_update->abm_level);
+ return true;
+}
+
+bool mod_power_varibright_set_level(struct mod_power *mod_power, unsigned int level,
+ struct dc_stream_update *stream_update)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ core_power->varibright_prop.varibright_level = level;
+ core_power->varibright_prop.varibright_hw_level = level;
+
+ /* find abm hw level to program, and save in stream update */
+ varibright_set_level(core_power);
+ *stream_update->abm_level = core_power->varibright_prop.varibright_hw_level;
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">ABM set level: level=%u -> (varibright_level=%u varibright_hw_level=%u) -> su->varibright_level=%u",
+ level,
+ core_power->varibright_prop.varibright_level,
+ core_power->varibright_prop.varibright_hw_level,
+ *stream_update->abm_level);
+ return true;
+}
+
+bool mod_power_varibright_set_hw_level(struct mod_power *mod_power, unsigned int level,
+ struct dc_stream_update *stream_update)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (level == 0 || level == ABM_LEVEL_IMMEDIATE_DISABLE)
+ core_power->varibright_prop.varibright_active = 0;
+ else
+ core_power->varibright_prop.varibright_active = 1;
+ core_power->varibright_prop.varibright_hw_level = level;
+ *stream_update->abm_level = core_power->varibright_prop.varibright_hw_level;
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+		">ABM set HW level: level=%u -> (varibright_level=%u varibright_hw_level=%u) -> su->varibright_level=%u",
+ level,
+ core_power->varibright_prop.varibright_level,
+ core_power->varibright_prop.varibright_hw_level,
+ *stream_update->abm_level);
+ return true;
+}
+
+bool mod_power_get_varibright_level(struct mod_power *mod_power,
+ unsigned int *varibright_level)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *varibright_level = core_power->varibright_prop.varibright_level;
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">get varibright level: cp->varibright_level=%u",
+ *varibright_level);
+	return true;
+}
+
+bool mod_power_get_varibright_hw_level(struct mod_power *mod_power,
+ unsigned int *varibright_level)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *varibright_level = core_power->varibright_prop.varibright_hw_level;
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">get varibright HW level: hw_level=%u",
+ *varibright_level);
+ return true;
+}
+
+bool mod_power_get_varibright_default_level(struct mod_power *mod_power,
+ unsigned int *varibright_level)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *varibright_level = core_power->varibright_prop.def_varibright_level;
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">get varibright default level: def_varibright_level=%u",
+ *varibright_level);
+ return true;
+}
+
+bool mod_power_get_varibright_enable(struct mod_power *mod_power,
+ bool *varibright_enable)
+{
+ struct core_power *core_power = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ *varibright_enable = core_power->varibright_prop.varibright_user_enable;
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">get varibright enable state: varibright_user_enable=%u",
+ (unsigned int) (*varibright_enable));
+ return true;
+}
+
+bool mod_power_is_abm_active(struct mod_power *mod_power,
+ const struct dc_link *link,
+ unsigned int inst)
+{
+ unsigned int user_backlight = 0;
+ unsigned int current_backlight = 0;
+ bool is_active = false;
+
+ if (mod_power == NULL)
+ return false;
+
+ mod_power_get_backlight_pwm(mod_power, &user_backlight, inst);
+	mod_power_get_hw_backlight_pwm(mod_power, link, &current_backlight);
+
+	is_active = (user_backlight != current_backlight);
+
+ DC_TRACE_LEVEL_MESSAGEP(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Backlight_ABM,
+ ">get ABM active state: is_active=%u (user_backlight_pwm=%u, current_backlight_pwm=%u)",
+ (unsigned int)is_active,
+ user_backlight,
+ current_backlight);
+ return is_active;
+}
+
+static void mod_power_psr_set_power_opt(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int active_psr_events,
+ bool psr_enable_request)
+{
+	struct core_power *core_power = NULL;
+	struct dc_link *link = NULL;
+	unsigned int stream_index = 0;
+	unsigned int power_opt = 0;
+
+	(void)psr_enable_request;
+
+ if (!stream)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+	stream_index = map_index_from_stream(core_power, stream);
+	if (stream_index >= core_power->num_entities)
+		return;
+
+	if (!core_power->map[stream_index].caps->psr_version)
+		return;
+
+ link = dc_stream_get_link(stream);
+
+ if (active_psr_events == 0) {
+ /* Static Screen */
+ power_opt |= (psr_power_opt_smu_opt_static_screen | psr_power_opt_z10_static_screen |
+ psr_power_opt_ds_disable_allow);
+ }
+
+ /* psr_power_opt_flag is a configuration parameter into the module that determines
+ * which optimizations to enable during psr
+ */
+ power_opt &= core_power->map[stream_index].caps->psr_power_opt_flag;
+ if (core_power->map[stream_index].psr_power_opt != power_opt) {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_VERBOSE,
+ WPP_BIT_FLAG_Firmware_PsrState,
+			"mod_power set_power_opt: psr_power_opt=0x%04x, power_opt=0x%04x, "
+			"active_psr_events=0x%04x, psr_power_opt_flag=0x%04x",
+ core_power->map[stream_index].psr_power_opt,
+ power_opt,
+ active_psr_events,
+ core_power->map[stream_index].caps->psr_power_opt_flag);
+ dc_link_set_psr_allow_active(link, NULL, false, false, &power_opt);
+ core_power->map[stream_index].psr_power_opt = power_opt;
+ }
+}
+
+static bool set_psr_enable(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ bool psr_enable,
+ bool wait,
+ bool force_static)
+{
+ struct core_power *core_power = NULL;
+ enum dc_psr_state state = PSR_STATE0;
+ unsigned int retry_count;
+ const unsigned int max_retry = 1000;
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0) {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: ERROR: stream=%p num_entities=%d",
+ stream,
+ core_power->num_entities);
+ return false;
+ }
+
+ if (psr_enable) {
+ unsigned int vsync_rate_hz;
+ struct dc_static_screen_params params = {0};
+
+ vsync_rate_hz = (unsigned int)div_u64(div_u64((
+ stream->timing.pix_clk_100hz * 100),
+ stream->timing.v_total),
+ stream->timing.h_total);
+
+ params.triggers.cursor_update = true;
+ params.triggers.overlay_update = true;
+ params.triggers.surface_update = true;
+ params.num_frames = calc_psr_num_static_frames(vsync_rate_hz);
+
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: CALCS: pix_clk_100hz=%u v_total=%u h_total=%u vsync_rate_hz=%u num_frames=%u",
+ stream->timing.pix_clk_100hz,
+ stream->timing.v_total,
+ stream->timing.h_total,
+ vsync_rate_hz,
+ params.num_frames);
+
+ dc_stream_set_static_screen_params(core_power->dc,
+ &stream, 1,
+ &params);
+ }
+
+ link = dc_stream_get_link(stream);
+
+ if (!dc_link_set_psr_allow_active(link, &psr_enable, false, force_static, NULL)) {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: ERROR: stream=%p link=%p psr_enable=%d",
+ stream,
+ link,
+ psr_enable);
+ return false;
+ }
+
+ if (wait) {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: BEGIN WAIT: psr_enable=%d",
+ (int)psr_enable);
+
+ for (retry_count = 0; retry_count <= max_retry; retry_count++) {
+ dc_link_get_psr_state(link, &state);
+ if (psr_enable) {
+ if (state != PSR_STATE0 &&
+ (!force_static || state == PSR_STATE3))
+ break;
+ } else {
+ if (state == PSR_STATE0)
+ break;
+ }
+ udelay(500);
+ }
+
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: END WAIT: psr_enable=%d",
+ (int)psr_enable);
+
+ /* assert if max retry hit */
+ if (retry_count >= max_retry) {
+ ASSERT(0);
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: ERROR: retry_count=%u: Unexpectedly long wait for PSR state change.",
+ retry_count);
+ }
+ } else {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_INFORMATION,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "set psr enable: PSR state change initiated (wait=false): psr_enable=%d",
+ (int)psr_enable);
+ }
+
+ return true;
+}
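The refresh-rate arithmetic above can be isolated as a plain-C sketch. This is an editorial illustration, not driver code; the helper name `vsync_rate_hz` and the sample 2200x1125 @ 148.5 MHz timing values are assumptions, but the unit handling mirrors the patch: DC stores the pixel clock in units of 100 Hz, and the two divisions happen in the same order as the `div_u64` calls.

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the arithmetic in set_psr_enable(): the vertical refresh rate in
 * Hz is (pix_clk_100hz * 100) / v_total / h_total, dividing in the same order
 * as the driver's nested div_u64 calls. */
static unsigned int vsync_rate_hz(uint64_t pix_clk_100hz,
                                  unsigned int v_total, unsigned int h_total)
{
        return (unsigned int)(((pix_clk_100hz * 100) / v_total) / h_total);
}
```

For the assumed 1080p timing (pix_clk_100hz = 1485000, v_total = 1125, h_total = 2200) this yields 60 Hz, which then feeds `calc_psr_num_static_frames()`.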
+
+bool mod_power_get_psr_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int *active_psr_events)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (!core_power->map[stream_index].caps->psr_version)
+ return false;
+
+ *active_psr_events = core_power->map[stream_index].psr_events;
+
+ return true;
+}
+
+bool mod_power_set_psr_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream, bool set_event,
+ enum psr_event event, bool wait)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+ unsigned int active_psr_events = 0;
+ bool psr_enable_request = false;
+ bool force_static = false;
+
+ if (mod_power == NULL || stream == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (core_power->num_entities == 0) {
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_ERROR,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power set_psr_event: ERROR: stream=%p event=%d num_entities=%d",
+ stream,
+ (int)event,
+ core_power->num_entities);
+ return false;
+ }
+
+ if (!core_power->map[stream_index].caps->psr_version)
+ return false;
+
+ if (set_event)
+ core_power->map[stream_index].psr_events |= event;
+ else
+ core_power->map[stream_index].psr_events &= ~event;
+
+ active_psr_events = core_power->map[stream_index].psr_events;
+
+ // ignore other events while a dynamic display switch holds psr in the forced-enabled state
+ if (active_psr_events & psr_event_dynamic_display_switch &&
+ event != psr_event_dynamic_display_switch)
+ return false;
+
+ // ignore other events while an os override hold keeps psr in the forced-enabled state
+ if (active_psr_events & psr_event_os_override_hold &&
+ event != psr_event_os_override_hold)
+ return false;
+
+ // ignore other events while dynamic link rate control holds psr in the forced-enabled state;
+ // dds events still need to be processed while in dynamic_link_rate_control
+ if (active_psr_events & psr_event_dynamic_link_rate_control &&
+ event != psr_event_dynamic_link_rate_control &&
+ event != psr_event_dds_defer_stream_enable &&
+ event != psr_event_dynamic_display_switch)
+ return false;
+
+ if (active_psr_events & (psr_event_test_harness_disable_psr | psr_event_os_request_disable))
+ psr_enable_request = false;
+ else if (active_psr_events & psr_event_pause)
+ psr_enable_request = false;
+ else if (active_psr_events & psr_event_test_harness_enable_psr)
+ psr_enable_request = true;
+ else if (active_psr_events & psr_event_dynamic_display_switch) {
+ psr_enable_request = true;
+ force_static = true;
+ } else if (active_psr_events & psr_event_dynamic_link_rate_control) {
+ psr_enable_request = true;
+ force_static = true;
+ } else if (active_psr_events & psr_event_edp_panel_off_disable_psr)
+ psr_enable_request = false;
+ else if (active_psr_events & (psr_event_hw_programming |
+ psr_event_defer_enable |
+ psr_event_dds_defer_stream_enable |
+ psr_event_vrr_transition |
+ psr_event_immediate_flip))
+ psr_enable_request = false;
+ else if (active_psr_events & psr_event_big_screen_video)
+ psr_enable_request = true;
+ else if (active_psr_events & psr_event_full_screen)
+ psr_enable_request = false;
+ else if (active_psr_events & psr_event_mpo_video_selective_update)
+ psr_enable_request = true;
+ else if (active_psr_events & psr_event_vsync)
+ psr_enable_request = false;
+ else if (active_psr_events & psr_event_crc_window_active)
+ psr_enable_request = false;
+ else
+ psr_enable_request = true;
+
+ DC_TRACE_LEVEL_MESSAGE(DAL_TRACE_LEVEL_VERBOSE,
+ WPP_BIT_FLAG_Firmware_PsrState,
+ "mod_power set_psr_event: before: psr_enabled=%d -> request: set_event=%d event=0x%04x -> result: psr_events=0x%04x psr_enable_request=%d",
+ (int)core_power->map[stream_index].psr_enabled,
+ (int)set_event,
+ (unsigned int)event,
+ (unsigned int)core_power->map[stream_index].psr_events,
+ (int)psr_enable_request);
+ mod_power_psr_set_power_opt(mod_power, stream, active_psr_events, psr_enable_request);
+
+ if (core_power->map[stream_index].psr_enabled != psr_enable_request || force_static) {
+ if (set_psr_enable(mod_power, stream, psr_enable_request, wait, force_static)) {
+ core_power->map[stream_index].psr_enabled = psr_enable_request;
+ }
+ }
+
+ return true;
+}
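The long if/else chain above is effectively a fixed-priority arbiter: the first matching event class decides the enable request, and anything lower in the list is never consulted. A minimal sketch of that pattern follows; the event names and bit values here are hypothetical stand-ins, not the driver's `psr_event_*` definitions, but the ordering mirrors the chain (disable-type events outrank enable-type ones, and an enable-type match such as big-screen video can shadow a later disable-type entry).

```c
#include <assert.h>

/* Hypothetical event bits, echoing the shape (not the values) of the
 * psr_event_* flags consulted above. */
#define EVT_OS_REQUEST_DISABLE 0x1u
#define EVT_PAUSE              0x2u
#define EVT_FULL_SCREEN        0x4u
#define EVT_BIG_SCREEN_VIDEO   0x8u

/* First match wins: a blocking event vetoes the request outright, while an
 * earlier enable-type match (big-screen video) shadows a later blocking one
 * (full screen), exactly as in the driver's chain. */
static int psr_enable_request(unsigned int events)
{
        if (events & EVT_OS_REQUEST_DISABLE)
                return 0;
        if (events & EVT_PAUSE)
                return 0;
        if (events & EVT_BIG_SCREEN_VIDEO)
                return 1;
        if (events & EVT_FULL_SCREEN)
                return 0;
        return 1; /* default: no blocking event active */
}
```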
+
+bool mod_power_get_psr_state(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ enum dc_psr_state *state)
+{
+ struct core_power *core_power = NULL;
+ const struct dc_link *link = NULL;
+
+ if (!stream)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ link = dc_stream_get_link(stream);
+ return dc_link_get_psr_state(link, state);
+}
+
+bool mod_power_get_psr_enabled_status(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ bool *psr_enabled)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (!core_power->map[stream_index].caps->psr_version)
+ return false;
+
+ *psr_enabled = core_power->map[stream_index].psr_enabled;
+
+ return true;
+}
+
+void mod_power_psr_residency(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ unsigned int *residency,
+ const uint8_t mode)
+{
+ struct core_power *core_power = NULL;
+ const struct dc_link *link = NULL;
+
+ if (!stream)
+ return;
+
+ if (mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return;
+
+ link = dc_stream_get_link(stream);
+
+ if (link != NULL)
+ link->dc->link_srv->edp_get_psr_residency(link, residency, mode);
+}
+
+bool mod_power_psr_get_active_psr_events(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, unsigned int *active_psr_events)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+
+ if (!stream)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ if (active_psr_events == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ *active_psr_events = core_power->map[stream_index].psr_events;
+ return true;
+}
+
+bool mod_power_psr_set_sink_vtotal_in_psr_active(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ uint16_t psr_vtotal_idle,
+ uint16_t psr_vtotal_su)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+ const struct dc_link *link = NULL;
+
+ if (!stream)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (!core_power->map[stream_index].caps->psr_version)
+ return false;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link)
+ return false;
+
+ return link->dc->link_srv->edp_set_sink_vtotal_in_psr_active(
+ link, psr_vtotal_idle, psr_vtotal_su);
+}
+
+static bool mod_power_set_replay_active(struct dc_stream_state *stream,
+ bool replay_active,
+ bool wait,
+ bool force_static)
+{
+ uint64_t state;
+ unsigned int retry_count;
+ const unsigned int max_retry = 1000;
+ struct dc_link *link = NULL;
+
+ if (!stream)
+ return false;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link)
+ return false;
+
+ if (!dc_link_set_replay_allow_active(link, &replay_active, false, force_static, NULL))
+ return false;
+
+ if (wait) {
+ for (retry_count = 0; retry_count <= max_retry; retry_count++) {
+ dc_link_get_replay_state(link, &state);
+ if (replay_active) {
+ if (state != REPLAY_STATE_0 &&
+ (!force_static || state == REPLAY_STATE_3))
+ break;
+ } else {
+ if (state == REPLAY_STATE_0)
+ break;
+ }
+ udelay(500);
+ }
+
+ /* assert if max retry hit */
+ if (retry_count >= max_retry)
+ ASSERT(0);
+ } else {
+ /* To-do: Add trace log */
+ }
+
+ return true;
+}
+
+static unsigned int mod_power_replay_setup_power_opt(struct dc_link *link,
+ unsigned int active_replay_events, bool is_ultra_sleep_mode)
+{
+ unsigned int power_opt = 0;
+
+ if (is_ultra_sleep_mode) {
+ /* Static Screen */
+ power_opt |= (replay_power_opt_smu_opt_static_screen | replay_power_opt_z10_static_screen);
+ } else if (active_replay_events & replay_event_test_harness_ultra_sleep) {
+ power_opt |= replay_power_opt_z10_static_screen;
+ }
+
+ /* replay_power_opt_flag is a configuration parameter passed into the module
+ * that determines which optimizations to enable during replay
+ */
+ power_opt &= link->replay_settings.config.replay_power_opt_supported;
+
+ return power_opt;
+}
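The setup helper above follows a request-then-mask pattern worth making explicit. The sketch below is illustrative only: the bit names and the helper signature are assumptions, not the driver's `replay_power_opt_*` flags, but the structure is the same — build the desired optimization set for the current state, then AND it with what the link reports as supported so an unsupported optimization can never reach firmware.

```c
#include <assert.h>

/* Hypothetical optimization bits following the shape of the
 * replay_power_opt_* flags. */
#define OPT_SMU_STATIC_SCREEN 0x1u
#define OPT_Z10_STATIC_SCREEN 0x2u

/* Build the wanted optimization set, then mask it against the link's
 * capability word before anything is sent to firmware. */
static unsigned int setup_power_opt(int ultra_sleep, unsigned int supported)
{
        unsigned int power_opt = 0;

        if (ultra_sleep)
                power_opt |= OPT_SMU_STATIC_SCREEN | OPT_Z10_STATIC_SCREEN;

        return power_opt & supported;
}
```

The masking step is what makes the caller safe on older sinks: requesting both bits against a link that only supports z10 quietly degrades to the z10 bit alone.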
+
+static bool mod_power_replay_set_power_opt(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int active_replay_events,
+ bool is_ultra_sleep_mode)
+{
+ struct dc_link *link = NULL;
+ unsigned int power_opt = 0;
+
+ (void)mod_power;
+
+ if (!stream)
+ return false;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return false;
+
+ power_opt = mod_power_replay_setup_power_opt(link, active_replay_events, is_ultra_sleep_mode);
+
+ if (!dc_link_set_replay_allow_active(link, NULL, false, false, &power_opt))
+ return false;
+
+ return true;
+}
+
+bool mod_power_get_replay_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream,
+ unsigned int *active_replay_events)
+{
+ struct core_power *core_power = NULL;
+ unsigned int stream_index = 0;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ *active_replay_events = core_power->map[stream_index].replay_events;
+
+ return true;
+}
+
+static bool mod_power_update_replay_active_status(unsigned int active_replay_events,
+ struct dc_link *link, uint32_t *coasting_vtotal, bool *is_full_screen_video, bool *is_ultra_sleep_mode, uint16_t *frame_skip_number, bool *is_video_playback)
+{
+ unsigned int replay_enable_option;
+
+ if (!link || !coasting_vtotal || !is_full_screen_video || !is_video_playback)
+ return false;
+
+ // Check that coasting_vtotal_table has been updated.
+ if (!link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_STATIC] ||
+ !link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_NOM])
+ return false;
+
+ replay_enable_option = link->replay_settings.config.replay_enable_option;
+
+ /* TODO: To support test harness and DDS event */
+
+ *coasting_vtotal = link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_NOM];
+ ASSERT(link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_NOM] <= 0xFFFF);
+ *frame_skip_number = (uint16_t)link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_NOM];
+
+ link->replay_settings.config.replay_timing_sync_supported = false;
+
+ *is_full_screen_video = false;
+
+ *is_ultra_sleep_mode = false;
+
+ *is_video_playback = false;
+
+ /* DSAT test scenario */
+ if (active_replay_events & replay_event_test_harness_mode) {
+ if (link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS])
+ *coasting_vtotal =
+ link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS];
+ if (link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS]) {
+ ASSERT(link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS] <= 0xFFFF);
+ *frame_skip_number =
+ (uint16_t)link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS];
+ }
+
+ /* During the ultra sleep mode testing, disable the timing sync in short vblank mode */
+ if (active_replay_events & (replay_event_test_harness_enable_replay)) {
+ if ((active_replay_events & replay_event_test_harness_ultra_sleep) &&
+ !link->replay_settings.config.replay_support_fast_resync_in_ultra_sleep_mode)
+ link->replay_settings.config.replay_timing_sync_supported = false;
+ return true;
+ }
+
+ return false;
+ } else if (active_replay_events & (replay_event_test_harness_enable_replay)) {
+ if (link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS])
+ *coasting_vtotal = link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS];
+ if (link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS]) {
+ uint32_t frame_skip_val =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS];
+
+ ASSERT(frame_skip_val <= 0xFFFF);
+ *frame_skip_number = (uint16_t)frame_skip_val;
+ }
+
+ /* During the ultra sleep mode testing, disable the timing sync in short vblank mode */
+ if ((active_replay_events & replay_event_test_harness_ultra_sleep) &&
+ !link->replay_settings.config.replay_support_fast_resync_in_ultra_sleep_mode)
+ link->replay_settings.config.replay_timing_sync_supported = false;
+ return true;
+ } else if (active_replay_events & (replay_event_test_harness_disable_replay | replay_event_os_request_disable)) {
+ // set last set coasting vtotal
+ if (link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS])
+ *coasting_vtotal = link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_TEST_HARNESS];
+ if (link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS]) {
+ uint32_t frame_skip_val =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_TEST_HARNESS];
+
+ ASSERT(frame_skip_val <= 0xFFFF);
+ *frame_skip_number = (uint16_t)frame_skip_val;
+ }
+ return false;
+ }
+
+ /* Inactive conditions */
+ if (active_replay_events & (replay_event_edp_panel_off_disable_psr |
+ replay_event_hw_programming |
+ replay_event_vrr |
+ replay_event_immediate_flip |
+ replay_event_prepare_vtotal |
+ replay_event_vrr_transition |
+ replay_event_pause |
+ replay_event_disable_replay_while_DPMS |
+ replay_event_sleep_resume |
+ replay_event_disable_in_AC |
+ replay_event_disable_replay_while_detect_display |
+ replay_event_infopacket |
+ replay_event_crc_window_active))
+ return false;
+
+ // Full screen scenario
+ if (active_replay_events & replay_event_full_screen) {
+ if (!(replay_enable_option & pr_enable_option_full_screen))
+ return false;
+ }
+
+ /* Full screen video scenario */
+ if (active_replay_events & replay_event_big_screen_video) {
+
+ link->replay_settings.config.replay_timing_sync_supported = false;
+
+ if (replay_enable_option & pr_enable_option_full_screen_video_coasting) {
+ unsigned int fsn_vid =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_FULL_SCREEN_VIDEO];
+
+ *coasting_vtotal =
+ link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_FULL_SCREEN_VIDEO];
+ ASSERT(fsn_vid <= 0xFFFF);
+ *frame_skip_number = (uint16_t)fsn_vid;
+ }
+
+ *is_video_playback = true;
+
+ if ((replay_enable_option & pr_enable_option_full_screen_video) &&
+ (replay_enable_option & pr_enable_option_full_screen_video_coasting)) {
+ *is_full_screen_video = true;
+ return true;
+ }
+
+ return false;
+ }
+
+ /* MPO video scenario
+ * Some MPO video cases may still contain a full-screen UI layer, in which
+ * case Replay is not expected to be enabled.
+ */
+ if ((active_replay_events & replay_event_mpo_video_selective_update) &&
+ !(active_replay_events & replay_event_full_screen)) {
+
+ link->replay_settings.config.replay_timing_sync_supported = false;
+
+ if (replay_enable_option & pr_enable_option_mpo_video_coasting) {
+ *coasting_vtotal = link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_NOM];
+ {
+ uint32_t frame_skip_val =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_NOM];
+
+ ASSERT(frame_skip_val <= 0xFFFF);
+ *frame_skip_number = (uint16_t)frame_skip_val;
+ }
+ }
+
+ *is_video_playback = true;
+
+ return (replay_enable_option & pr_enable_option_mpo_video) != 0;
+ }
+
+ /* Static screen scenario */
+ if (!(active_replay_events & replay_event_vsync)) {
+
+ if (replay_enable_option & pr_enable_option_static_screen_coasting) {
+ // Do not adjust eDP refresh rate if static screen + normal sleep mode
+ if ((!(link->replay_settings.config.replay_power_opt_supported &
+ replay_power_opt_z10_static_screen)) ||
+ (active_replay_events & replay_event_cursor_updating)) {
+ // normal sleep mode
+ *coasting_vtotal =
+ link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_NOM];
+ {
+ uint32_t frame_skip_val =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_NOM];
+
+ ASSERT(frame_skip_val <= 0xFFFF);
+ *frame_skip_number = (uint16_t)frame_skip_val;
+ }
+ } else {
+ // ultra sleep mode
+ *coasting_vtotal =
+ link->replay_settings.coasting_vtotal_table[PR_COASTING_TYPE_STATIC];
+ {
+ uint32_t frame_skip_val =
+ link->replay_settings.frame_skip_number_table[PR_COASTING_TYPE_STATIC];
+
+ ASSERT(frame_skip_val <= 0xFFFF);
+ *frame_skip_number = (uint16_t)frame_skip_val;
+ }
+ *is_ultra_sleep_mode = true;
+ }
+ }
+
+ if (replay_enable_option & pr_enable_option_static_screen) {
+ if (!link->replay_settings.config.replay_support_fast_resync_in_ultra_sleep_mode)
+ link->replay_settings.config.replay_timing_sync_supported = false;
+ return true;
+ }
+
+ return false;
+ }
+
+ /* General UI scenario */
+ if (active_replay_events & replay_event_general_ui)
+ return (replay_enable_option & pr_enable_option_general_ui) != 0;
+
+ return false;
+}
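One branch of the decision table above, the static-screen case, is worth isolating: coasting depth depends on whether the z10 static-screen optimization is supported and whether anything (such as the cursor) is still updating. The sketch below is an editorial simplification; the enum and function names are invented for illustration, but the branch condition mirrors the driver's.

```c
#include <assert.h>

enum coasting_type { COAST_NOM, COAST_STATIC };

/* Sketch of the static-screen branch: ultra sleep mode (the deeper
 * COAST_STATIC coasting vtotal) is entered only when z10 static-screen
 * coasting is supported and the cursor is not still updating; otherwise
 * the nominal coasting vtotal is kept. */
static enum coasting_type static_screen_coasting(int z10_supported,
                                                 int cursor_updating,
                                                 int *ultra_sleep_mode)
{
        if (z10_supported && !cursor_updating) {
                *ultra_sleep_mode = 1;
                return COAST_STATIC;
        }
        *ultra_sleep_mode = 0;
        return COAST_NOM;
}
```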
+
+bool mod_power_replay_set_coasting_vtotal(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ uint32_t coasting_vtotal,
+ uint16_t frame_skip_number)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+
+ if (!stream)
+ return false;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ return link->dc->link_srv->edp_set_coasting_vtotal(link, coasting_vtotal, frame_skip_number);
+}
+
+void mod_power_replay_set_timing_sync_supported(struct mod_power *mod_power,
+ const struct dc_stream_state *stream)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index = 0;
+ union dmub_replay_cmd_set cmd_data = { 0 };
+
+ if (!stream || mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0)
+ return;
+
+ stream_index = map_index_from_stream(core_power, stream);
+ if (stream_index >= core_power->num_entities) //invalid index
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ cmd_data.sync_data.timing_sync_supported = link->replay_settings.config.replay_timing_sync_supported;
+
+ link->dc->link_srv->edp_send_replay_cmd(link, Replay_Set_Timing_Sync_Supported,
+ &cmd_data);
+}
+
+void mod_power_replay_disabled_adaptive_sync_sdp(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool force_disabled)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index = 0;
+ union dmub_replay_cmd_set cmd_data = { 0 };
+
+ if (!stream || mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0)
+ return;
+
+ stream_index = map_index_from_stream(core_power, stream);
+ if (stream_index >= core_power->num_entities) //invalid index
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ cmd_data.disabled_adaptive_sync_sdp_data.force_disabled = force_disabled;
+
+ link->dc->link_srv->edp_send_replay_cmd(link, Replay_Disabled_Adaptive_Sync_SDP,
+ &cmd_data);
+}
+
+static void mod_power_replay_set_general_cmd(struct mod_power *mod_power,
+ const struct dc_stream_state *stream,
+ const enum dmub_cmd_replay_general_subtype general_cmd_type,
+ const uint32_t param1, const uint32_t param2)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index = 0;
+ union dmub_replay_cmd_set cmd_data = { 0 };
+
+ if (!stream || mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0)
+ return;
+
+ stream_index = map_index_from_stream(core_power, stream);
+ if (stream_index >= core_power->num_entities) //invalid index
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ cmd_data.set_general_cmd_data.subtype = general_cmd_type;
+ cmd_data.set_general_cmd_data.param1 = param1;
+ cmd_data.set_general_cmd_data.param2 = param2;
+ link->dc->link_srv->edp_send_replay_cmd(link, Replay_Set_General_Cmd,
+ &cmd_data);
+}
+
+void mod_power_replay_disabled_desync_error_detection(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool force_disabled)
+{
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_DISABLED_DESYNC_ERROR_DETECTION,
+ force_disabled, 0);
+}
+
+static void mod_power_replay_set_pseudo_vtotal(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, uint16_t vtotal)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index = 0;
+ union dmub_replay_cmd_set cmd_data = { 0 };
+
+ if (!stream || mod_power == NULL)
+ return;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+ if (core_power->num_entities == 0)
+ return;
+
+ stream_index = map_index_from_stream(core_power, stream);
+ if (stream_index >= core_power->num_entities) //invalid index
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ cmd_data.pseudo_vtotal_data.vtotal = vtotal;
+
+ if (link->replay_settings.last_pseudo_vtotal != vtotal) {
+ link->replay_settings.last_pseudo_vtotal = vtotal;
+ link->dc->link_srv->edp_send_replay_cmd(link, Replay_Set_Pseudo_VTotal, &cmd_data);
+ }
+}
+
+static void mod_power_update_error_status(struct mod_power *mod_power,
+ const struct dc_stream_state *stream)
+{
+ struct dc_link *link = NULL;
+ union replay_debug_flags *debug = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link)
+ return;
+
+ debug = (union replay_debug_flags *)&link->replay_settings.config.debug_flags;
+
+ if (!debug->bitfields.enable_visual_confirm_debug)
+ return;
+
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_UPDATE_ERROR_STATUS,
+ link->replay_settings.config.replay_error_status.raw, 0);
+}
+
+void mod_power_set_low_rr_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool low_rr_supported)
+{
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link)
+ return;
+
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_SET_LOW_RR_ACTIVATE,
+ low_rr_supported, 0);
+}
+
+void mod_power_set_video_conferencing_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool video_conferencing_activate)
+{
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_VIDEO_CONFERENCING,
+ video_conferencing_activate, 0);
+}
+
+void mod_power_set_coasting_vtotal_without_frame_update(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, uint32_t coasting_vtotal)
+{
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_SET_COASTING_VTOTAL_WITHOUT_FRAME_UPDATE,
+ coasting_vtotal, 0);
+}
+
+void mod_power_set_replay_continuously_resync(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool enable)
+{
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_SET_CONTINUOUSLY_RESYNC,
+ enable, 0);
+}
+
+void mod_power_set_live_capture_with_cvt_activate(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, bool live_capture_with_cvt_activate)
+{
+ struct dc_link *link = NULL;
+
+ if (mod_power == NULL || stream == NULL)
+ return;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return;
+
+ // Check if LIVE_CAPTURE_WITH_CVT bit is enabled in DalRegKey_ReplayOptimization
+ if (!link->replay_settings.config.replay_optimization.bits.LIVE_CAPTURE_WITH_CVT)
+ return;
+
+ if (link->replay_settings.config.live_capture_with_cvt_activated != live_capture_with_cvt_activate) {
+ link->replay_settings.config.live_capture_with_cvt_activated = live_capture_with_cvt_activate;
+ mod_power_replay_set_general_cmd(mod_power, stream,
+ REPLAY_GENERAL_CMD_LIVE_CAPTURE_WITH_CVT,
+ live_capture_with_cvt_activate, 0);
+ }
+}
+
+bool mod_power_set_replay_event(struct mod_power *mod_power,
+ struct dc_stream_state *stream, bool set_event,
+ enum replay_event event, bool wait_for_disable)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int stream_index = 0;
+ unsigned int active_replay_events = 0;
+ bool replay_active_request = false;
+ bool force_static = false;
+ uint32_t coasting_vtotal = 0;
+ bool current_timing_sync_status = false;
+ bool is_full_screen_video = false;
+ bool is_ultra_sleep_mode = false;
+ unsigned int sink_duration_us = 0;
+ bool low_rr_active = false;
+ uint16_t frame_skip_number = 0;
+ bool is_video_playback = false;
+
+ if (!stream)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ stream_index = map_index_from_stream(core_power, stream);
+
+ if (set_event)
+ core_power->map[stream_index].replay_events |= event;
+ else
+ core_power->map[stream_index].replay_events &= ~event;
+
+ link = dc_stream_get_link(stream);
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return false;
+
+ if ((core_power->map[stream_index].replay_events & replay_event_disable_replay_while_switching_mux) != 0)
+ return false;
+
+ if ((core_power->map[stream_index].replay_events & replay_event_os_override_hold) != 0)
+ return false;
+
+ active_replay_events = core_power->map[stream_index].replay_events;
+
+ current_timing_sync_status =
+ link->replay_settings.config.replay_timing_sync_supported;
+
+ replay_active_request = mod_power_update_replay_active_status(active_replay_events,
+ link, &coasting_vtotal, &is_full_screen_video, &is_ultra_sleep_mode, &frame_skip_number, &is_video_playback);
+
+ if (is_full_screen_video)
+ mod_power_replay_set_pseudo_vtotal(mod_power, stream,
+ link->replay_settings.low_rr_full_screen_video_pseudo_vtotal);
+ else
+ mod_power_replay_set_pseudo_vtotal(mod_power, stream, 0);
+
+ // If timing_sync_status changed, resend the timing_sync_supported value and re-enable replay
+ if (current_timing_sync_status != link->replay_settings.config.replay_timing_sync_supported)
+ mod_power_replay_set_timing_sync_supported(mod_power, stream);
+
+ if (link->replay_settings.config.low_rr_supported) {
+ sink_duration_us =
+ (unsigned int)(div_u64(((unsigned long long)(coasting_vtotal)
+ * 10000) * stream->timing.h_total,
+ stream->timing.pix_clk_100hz));
+ low_rr_active = sink_duration_us >= LOW_REFRESH_RATE_DURATION_US_UPPER_BOUND;
+ if (low_rr_active != link->replay_settings.config.low_rr_activated) {
+ mod_power_set_low_rr_activate(mod_power, stream, low_rr_active);
+ link->replay_settings.config.low_rr_activated = low_rr_active;
+ }
+ }
+
+ // The combined call fails when:
+ // 1. the DMUB function is not supported (for backward compatibility), or
+ // 2. active_replay_events and coasting_vtotal cannot be updated at the same time.
+ if (!mod_power_replay_set_power_opt_and_coasting_vtotal(mod_power,
+ stream, active_replay_events, coasting_vtotal, is_ultra_sleep_mode, frame_skip_number)) {
+ if (!mod_power_replay_set_power_opt(mod_power, stream, active_replay_events, is_ultra_sleep_mode))
+ return false;
+
+ if (!mod_power_replay_set_coasting_vtotal(mod_power, stream, coasting_vtotal, frame_skip_number))
+ return false;
+ }
+
+ mod_power_set_live_capture_with_cvt_activate(mod_power, stream, is_video_playback);
+
+ mod_power_update_error_status(mod_power, stream);
+
+ // If Replay is going to be enabled (whether disable -> enable or enable -> enable),
+ // there is no need to wait.
+ // If Replay is going to be disabled:
+ //   disable -> disable: the Replay DMUB state is already state 0, so
+ //     wait_for_disable makes no difference either way.
+ //   enable -> disable: wait only if wait_for_disable is true.
+ if (replay_active_request)
+ wait_for_disable = false;
+
+ if (!mod_power_set_replay_active(stream, replay_active_request, wait_for_disable, force_static))
+ return false;
+
+ return true;
+}
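The low-refresh-rate check inside the function above hinges on the sink frame duration computed from the coasting vtotal. That arithmetic can be sketched standalone; the helper name and the sample timing values are assumptions, but the unit handling matches the driver: duration is vtotal * h_total / pixel_clock, and with the pixel clock stored in 100 Hz units the microsecond value becomes vtotal * 10000 * h_total / pix_clk_100hz.

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the sink_duration_us computation in mod_power_set_replay_event():
 * the factor of 10000 converts from 100 Hz pixel-clock units to microseconds
 * (1e6 / 100). The 64-bit intermediate avoids overflow for large vtotals. */
static unsigned int sink_duration_us(uint64_t coasting_vtotal,
                                     unsigned int h_total,
                                     unsigned int pix_clk_100hz)
{
        return (unsigned int)((coasting_vtotal * 10000 * h_total) / pix_clk_100hz);
}
```

With an assumed 2200x1125 @ 148.5 MHz timing, a coasting vtotal of 1125 gives a ~16666 us frame (60 Hz), and doubling the vtotal doubles the duration; low-rr is activated only once the duration reaches the upper-bound threshold.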
+
+bool mod_power_get_replay_active_status(const struct dc_stream_state *stream,
+ bool *replay_active)
+{
+ const struct dc_link *link = NULL;
+
+ if (!stream)
+ return false;
+
+ link = dc_stream_get_link(stream);
+ if (!link)
+ return false;
+
+ *replay_active = link->replay_settings.replay_allow_active;
+
+ return true;
+}
+
+void mod_power_replay_residency(const struct dc_stream_state *stream,
+ unsigned int *residency, const bool is_start, const bool is_alpm)
+{
+ const struct dc_link *link = NULL;
+ enum pr_residency_mode mode;
+
+ if (!stream)
+ return;
+
+ link = dc_stream_get_link(stream);
+
+ if (is_alpm)
+ mode = PR_RESIDENCY_MODE_ALPM;
+ else
+ mode = PR_RESIDENCY_MODE_PHY;
+
+ if (link && link->dc && link->dc->link_srv)
+ link->dc->link_srv->edp_replay_residency(link, residency, is_start, mode);
+}
+
+bool mod_power_replay_set_power_opt_and_coasting_vtotal(struct mod_power *mod_power,
+ const struct dc_stream_state *stream, unsigned int active_replay_events, uint32_t coasting_vtotal,
+ bool is_ultra_sleep_mode, uint16_t frame_skip_number)
+{
+ struct core_power *core_power = NULL;
+ struct dc_link *link = NULL;
+ unsigned int power_opt = 0;
+
+ if (!stream)
+ return false;
+
+ if (mod_power == NULL)
+ return false;
+
+ core_power = MOD_POWER_TO_CORE(mod_power);
+
+ if (core_power->num_entities == 0)
+ return false;
+
+ link = dc_stream_get_link(stream);
+
+ if (!link || !link->replay_settings.replay_feature_enabled)
+ return false;
+
+ power_opt = mod_power_replay_setup_power_opt(link, active_replay_events, is_ultra_sleep_mode);
+
+ return link->dc->link_srv->edp_set_replay_power_opt_and_coasting_vtotal(link, &power_opt, coasting_vtotal, frame_skip_number);
+}
+
+
+
+
+
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread

* [PATCH 09/19] drm/amd/display: Add power module on Linux
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (7 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 08/19] drm/amd/display: Introduce power module on Linux Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 10/19] drm/amd/display: Fix fpu guard warning Chenyu Chen
` (10 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Ray Wu, Chenyu Chen
From: Ray Wu <ray.wu@amd.com>
[Why & How]
Refactor dm to use the power module for managing
replay, PSR, and backlight control.
Key changes:
- Introduced replay / PSR events to enable / disable replay / PSR.
- Implemented replay rate control and power options.
- Refactored backlight control to use the power module.
- Enhanced handling of VRR within the replay and PSR logic.
Reviewed-by: Leo Li <sunpeng.li@amd.com>
Signed-off-by: Ray Wu <ray.wu@amd.com>
Signed-off-by: Leo Li <sunpeng.li@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 344 ++++++++++++++----
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 10 +
.../drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c | 36 +-
.../amd/display/amdgpu_dm/amdgpu_dm_crtc.c | 74 +---
.../amd/display/amdgpu_dm/amdgpu_dm_crtc.h | 5 +-
.../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 60 ++-
.../drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c | 26 +-
.../drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c | 242 ++++--------
.../drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h | 13 +-
.../amd/display/amdgpu_dm/amdgpu_dm_replay.c | 143 ++++----
.../amd/display/amdgpu_dm/amdgpu_dm_replay.h | 28 +-
.../display/amdgpu_dm/amdgpu_dm_services.c | 30 +-
12 files changed, 566 insertions(+), 445 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 09121152b980..5b5a6f66f8e5 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -104,6 +104,7 @@
#include "ivsrcid/dcn/irqsrcs_dcn_1_0.h"
#include "modules/inc/mod_freesync.h"
+#include "modules/inc/mod_power.h"
#include "modules/power/power_helpers.h"
static_assert(AMDGPU_DMUB_NOTIFICATION_MAX == DMUB_NOTIFICATION_MAX, "AMDGPU_DMUB_NOTIFICATION_MAX mismatch");
@@ -1878,6 +1879,70 @@ static enum dmub_ips_disable_type dm_get_default_ips_mode(
return ret;
}
+static int amdgpu_dm_init_power_module(struct amdgpu_display_manager *dm)
+{
+ struct mod_power_init_params init_data[MAX_NUM_EDP];
+
+ if (dm->num_of_edps == 0) {
+ drm_dbg_driver(
+ dm->ddev,
+ "amdgpu: No eDP detected, skip initializing power module\n");
+ return 0;
+ }
+
+ /* Initialize all the power module parameters */
+ for (int i = 0; i < dm->num_of_edps; i++) {
+ init_data[i].allow_psr_smu_optimizations =
+ !!(amdgpu_dc_feature_mask & DC_PSR_ALLOW_SMU_OPT);
+ init_data[i].allow_psr_multi_disp_optimizations =
+ !!(amdgpu_dc_feature_mask & DC_PSR_ALLOW_MULTI_DISP_OPT);
+ /* See dm_late_init */
+ init_data[i].backlight_ramping_override = false;
+ init_data[i].backlight_ramping_start = 0xCCCC;
+ init_data[i].backlight_ramping_reduction = 0xCCCCCCCC;
+ init_data[i].def_varibright_level = 0;
+ init_data[i].abm_config_setting = 0;
+ init_data[i].num_backlight_levels = 101;
+ init_data[i].use_nits_based_brightness = false;
+ init_data[i].panel_max_millinits = 0;
+ init_data[i].panel_min_millinits = 0;
+ init_data[i].disable_fractional_pwm =
+ !(amdgpu_dc_feature_mask & DC_DISABLE_FRACTIONAL_PWM_MASK);
+ init_data[i].use_custom_backlight_caps = false;
+ init_data[i].custom_backlight_caps_config_no = 0;
+ init_data[i].use_linear_backlight_curve = false;
+ init_data[i].def_varibright_enable = 0;
+ init_data[i].varibright_level = 0;
+ /*
+ * Power module uses 16-bit backlight levels (0xFFFF max) rather
+ * than 8-bit (0xFF max).
+ */
+ init_data[i].min_backlight_pwm =
+ dm->backlight_caps[i].min_input_signal * 0x101;
+ init_data[i].max_backlight_pwm =
+ dm->backlight_caps[i].max_input_signal * 0x101;
+ init_data[i].min_abm_backlight =
+ dm->backlight_caps[i].min_input_signal * 0x101;
+
+ /* Min backlight level after ABM reduction; don't allow below 1%:
+ * 0xFFFF x 0.01 = 0x28F
+ */
+ init_data[i].min_abm_backlight = (init_data[i].min_abm_backlight < 0x28F) ?
+ 0x28F : init_data[i].min_abm_backlight;
+ }
+
+ dm->power_module = mod_power_create(dm->dc, init_data, dm->num_of_edps);
+ if (!dm->power_module) {
+ drm_err(dm->ddev, "amdgpu: Error allocating memory for power module\n");
+ return -ENOMEM;
+ }
+
+ mod_power_hw_init(dm->power_module);
+ drm_dbg_driver(dm->ddev, "amdgpu: Power module init done\n");
+
+ return 0;
+}
+
static int amdgpu_dm_init(struct amdgpu_device *adev)
{
struct dc_init_data init_data;
@@ -1895,6 +1960,8 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
mutex_init(&adev->dm.dc_lock);
mutex_init(&adev->dm.audio_lock);
+ spin_lock_init(&adev->dm.dmub_lock);
+
if (amdgpu_dm_irq_init(adev)) {
drm_err(adev_to_drm(adev), "failed to initialize DM IRQ support.\n");
goto error;
@@ -2191,6 +2258,9 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
goto error;
}
+ if (amdgpu_dm_init_power_module(&adev->dm))
+ goto error;
+
/* create fake encoders for MST */
dm_dp_create_fake_mst_encoders(adev);
@@ -2332,6 +2402,10 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev)
adev->dm.freesync_module = NULL;
}
+ if (adev->dm.power_module) {
+ mod_power_destroy(adev->dm.power_module);
+ adev->dm.power_module = NULL;
+ }
mutex_destroy(&adev->dm.audio_lock);
mutex_destroy(&adev->dm.dc_lock);
mutex_destroy(&adev->dm.dpia_aux_lock);
@@ -5051,8 +5125,8 @@ static int amdgpu_dm_mode_config_init(struct amdgpu_device *adev)
#define AMDGPU_DM_MIN_SPREAD ((AMDGPU_DM_DEFAULT_MAX_BACKLIGHT - AMDGPU_DM_DEFAULT_MIN_BACKLIGHT) / 2)
#define AUX_BL_DEFAULT_TRANSITION_TIME_MS 50
-static void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
- int bl_idx)
+void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm,
+ int bl_idx)
{
struct amdgpu_dm_backlight_caps *caps = &dm->backlight_caps[bl_idx];
@@ -5214,15 +5288,34 @@ static u32 convert_brightness_to_user(const struct amdgpu_dm_backlight_caps *cap
max - min);
}
+static struct dc_stream_state *dm_find_stream_with_link(
+ struct amdgpu_display_manager *dm,
+ struct dc_link *link)
+{
+ struct dc_state *cur_dc_state = dm->dc->current_state;
+ struct dc_stream_state *stream = NULL;
+ int i;
+
+ for (i = 0; i < cur_dc_state->stream_count; i++) {
+ stream = cur_dc_state->streams[i];
+ if (stream->link == link)
+ return stream;
+ }
+
+ return NULL;
+}
+
static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
int bl_idx,
u32 user_brightness)
{
struct amdgpu_dm_backlight_caps *caps;
struct dc_link *link;
- u32 brightness;
- bool rc, reallow_idle = false;
+ u32 brightness = 0;
+ bool rc = false, reallow_idle = false;
struct drm_connector *connector;
+ struct dc_stream_state *stream;
+ unsigned int min, max;
list_for_each_entry(connector, &dm->ddev->mode_config.connector_list, head) {
struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector);
@@ -5252,13 +5345,6 @@ static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
if (caps->brightness_mask)
brightness |= caps->brightness_mask;
- /* Change brightness based on AUX property */
- mutex_lock(&dm->dc_lock);
- if (dm->dc->caps.ips_support && dm->dc->ctx->dmub_srv->idle_allowed) {
- dc_allow_idle_optimizations(dm->dc, false);
- reallow_idle = true;
- }
-
if (trace_amdgpu_dm_brightness_enabled()) {
trace_amdgpu_dm_brightness(__builtin_return_address(0),
user_brightness,
@@ -5267,22 +5353,45 @@ static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
power_supply_is_system_supplied() > 0);
}
- if (caps->aux_support) {
- rc = dc_link_set_backlight_level_nits(link, true, brightness,
- AUX_BL_DEFAULT_TRANSITION_TIME_MS);
- if (!rc)
- DRM_DEBUG("DM: Failed to update backlight via AUX on eDP[%d]\n", bl_idx);
- } else {
- struct set_backlight_level_params backlight_level_params = { 0 };
+ stream = dm_find_stream_with_link(dm, link);
+ if (!stream)
+ return;
- backlight_level_params.backlight_pwm_u16_16 = brightness;
- backlight_level_params.transition_time_in_ms = 0;
+ mutex_lock(&dm->dc_lock);
+ if (dm->dc->caps.ips_support && dm->dc->ctx->dmub_srv->idle_allowed) {
+ dc_allow_idle_optimizations(dm->dc, false);
+ reallow_idle = true;
+ }
- rc = dc_link_set_backlight_level(link, &backlight_level_params);
- if (!rc)
- DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx);
+ if (caps->aux_support) {
+ rc = mod_power_set_backlight_nits(dm->power_module, stream, brightness,
+ AUX_BL_DEFAULT_TRANSITION_TIME_MS, false, true);
+ } else {
+ /* power module uses millipercent */
+ get_brightness_range(caps, &min, &max);
+ brightness = DIV_ROUND_CLOSEST(brightness * 100, (max - min)) * 1000;
+ rc = mod_power_set_backlight_percent(dm->power_module, stream,
+ brightness, 0, false);
}
+	/*
+	 * Some kms clients create a ramped backlight transition effect
+	 * by rapidly changing the backlight. Yet we must wait for dmcub
+	 * fw to exit psr/replay before programming the backlight. To
+	 * prevent lag, keep psr/replay disabled and let the next atomic
+	 * flip clear the event.
+	 *
+	 * ToDo: use ISM to handle rapid backlight changes
+	 *
+	 * Rapid backlight changes are similar to rapid cursor events,
+	 * which are now handled by ISM. ISM can delay the event until the
+	 * system is really idle, so we may use ISM to handle backlight
+	 * changes as well.
+	 */
+ amdgpu_dm_psr_set_event(dm, stream, true,
+ psr_event_hw_programming, true);
+ amdgpu_dm_replay_set_event(dm, stream, true,
+ replay_event_hw_programming, true);
+
if (dm->dc->caps.ips_support && reallow_idle)
dc_allow_idle_optimizations(dm->dc, true);
@@ -5500,6 +5609,8 @@ static void setup_backlight_device(struct amdgpu_display_manager *dm,
static void amdgpu_set_panel_orientation(struct drm_connector *connector);
+
+
/*
* In this architecture, the association
* connector -> encoder -> crtc
@@ -5741,7 +5852,7 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
psr_feature_enabled = false;
if (psr_feature_enabled) {
- amdgpu_dm_set_psr_caps(link);
+ amdgpu_dm_set_psr_caps(link, aconnector);
drm_info(adev_to_drm(adev), "%s: PSR support %d, DC PSR ver %d, sink PSR ver %d DPCD caps 0x%x su_y_granularity %d\n",
aconnector->base.name,
link->psr_settings.psr_feature_enabled,
@@ -9793,7 +9904,8 @@ static void update_stream_irq_parameters(
spin_unlock_irqrestore(&adev_to_drm(adev)->event_lock, flags);
}
-static void amdgpu_dm_handle_vrr_transition(struct dm_crtc_state *old_state,
+static void amdgpu_dm_handle_vrr_transition(struct amdgpu_display_manager *dm,
+ struct dm_crtc_state *old_state,
struct dm_crtc_state *new_state)
{
bool old_vrr_active = amdgpu_dm_crtc_vrr_active(old_state);
@@ -9812,6 +9924,13 @@ static void amdgpu_dm_handle_vrr_transition(struct dm_crtc_state *old_state,
WARN_ON(drm_crtc_vblank_get(new_state->base.crtc) != 0);
drm_dbg_driver(new_state->base.crtc->dev, "%s: crtc=%u VRR off->on: Get vblank ref\n",
__func__, new_state->base.crtc->base.id);
+
+ scoped_guard(mutex, &dm->dc_lock) {
+ amdgpu_dm_psr_set_event(dm, new_state->stream, true,
+ psr_event_vrr_transition, true);
+ amdgpu_dm_replay_set_event(dm, new_state->stream, true,
+ replay_event_vrr, true);
+ }
} else if (old_vrr_active && !new_vrr_active) {
/* Transition VRR active -> inactive:
* Allow vblank irq disable again for fixed refresh rate.
@@ -9820,6 +9939,13 @@ static void amdgpu_dm_handle_vrr_transition(struct dm_crtc_state *old_state,
drm_crtc_vblank_put(new_state->base.crtc);
drm_dbg_driver(new_state->base.crtc->dev, "%s: crtc=%u VRR on->off: Drop vblank ref\n",
__func__, new_state->base.crtc->base.id);
+
+ scoped_guard(mutex, &dm->dc_lock) {
+ amdgpu_dm_psr_set_event(dm, new_state->stream, false,
+ psr_event_vrr_transition, false);
+ amdgpu_dm_replay_set_event(dm, new_state->stream, false,
+ replay_event_vrr, false);
+ }
}
}
@@ -9917,7 +10043,8 @@ static void amdgpu_dm_update_cursor(struct drm_plane *plane,
}
}
-static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
+static void amdgpu_dm_enable_self_refresh(struct amdgpu_display_manager *dm,
+ struct amdgpu_crtc *acrtc_attach,
const struct dm_crtc_state *acrtc_state,
const u64 current_ts)
{
@@ -9925,20 +10052,10 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
struct amdgpu_dm_connector *aconn =
(struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
- bool vrr_active = amdgpu_dm_crtc_vrr_active(acrtc_state);
-
- if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
- if (pr->config.replay_supported && !pr->replay_feature_enabled)
- amdgpu_dm_link_setup_replay(acrtc_state->stream->link, aconn);
- else if (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED &&
- !psr->psr_feature_enabled)
- if (!aconn->disallow_edp_enter_psr)
- amdgpu_dm_link_setup_psr(acrtc_state->stream);
- }
/* Decrement skip count when SR is enabled and we're doing fast updates. */
if (acrtc_state->update_type == UPDATE_TYPE_FAST &&
- (psr->psr_feature_enabled || pr->config.replay_supported)) {
+ (psr->psr_feature_enabled || pr->replay_feature_enabled)) {
if (aconn->sr_skip_count > 0)
aconn->sr_skip_count--;
@@ -9953,17 +10070,15 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
* of update events.
* See `amdgpu_dm_crtc_vblank_control_worker()`.
*/
- if (!vrr_active &&
- acrtc_attach->dm_irq_params.allow_sr_entry &&
-#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
- !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
-#endif
- (current_ts - psr->psr_dirty_rects_change_timestamp_ns) > 500000000) {
- if (pr->replay_feature_enabled && !pr->replay_allow_active)
- amdgpu_dm_replay_enable(acrtc_state->stream, true);
- if (psr->psr_version == DC_PSR_VERSION_SU_1 &&
- !psr->psr_allow_active && !aconn->disallow_edp_enter_psr)
- amdgpu_dm_psr_enable(acrtc_state->stream);
+ if (acrtc_attach->dm_irq_params.allow_sr_entry &&
+ (current_ts - psr->psr_dirty_rects_change_timestamp_ns) > 500000000) {
+ amdgpu_dm_psr_set_event(dm, acrtc_state->stream, false,
+ psr_event_hw_programming, false);
+
+ amdgpu_dm_replay_set_event(dm, acrtc_state->stream, true,
+ replay_event_general_ui, true);
+ amdgpu_dm_replay_set_event(dm, acrtc_state->stream, false,
+ replay_event_hw_programming, false);
}
} else {
acrtc_attach->dm_irq_params.allow_sr_entry = false;
@@ -10125,15 +10240,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
*/
if (acrtc_state->stream->link->psr_settings.psr_version >= DC_PSR_VERSION_SU_1 &&
acrtc_attach->dm_irq_params.allow_sr_entry &&
-#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
- !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
-#endif
dirty_rects_changed) {
mutex_lock(&dm->dc_lock);
acrtc_state->stream->link->psr_settings.psr_dirty_rects_change_timestamp_ns =
timestamp_ns;
- if (acrtc_state->stream->link->psr_settings.psr_allow_active)
- amdgpu_dm_psr_disable(acrtc_state->stream, true);
+ amdgpu_dm_psr_set_event(dm, acrtc_state->stream, true,
+ psr_event_hw_programming, true);
mutex_unlock(&dm->dc_lock);
}
}
@@ -10298,15 +10410,6 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
if (acrtc_state->abm_level != dm_old_crtc_state->abm_level)
bundle->stream_update.abm_level = &acrtc_state->abm_level;
- mutex_lock(&dm->dc_lock);
- if ((acrtc_state->update_type > UPDATE_TYPE_FAST) || vrr_active) {
- if (acrtc_state->stream->link->replay_settings.replay_allow_active)
- amdgpu_dm_replay_disable(acrtc_state->stream);
- if (acrtc_state->stream->link->psr_settings.psr_allow_active)
- amdgpu_dm_psr_disable(acrtc_state->stream, true);
- }
- mutex_unlock(&dm->dc_lock);
-
/*
* If FreeSync state on the stream has changed then we need to
* re-adjust the min/max bounds now that DC doesn't handle this
@@ -10344,8 +10447,8 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
if (dm_old_crtc_state->active_planes != acrtc_state->active_planes)
dm_update_pflip_irq_state(drm_to_adev(dev),
acrtc_attach);
-
- amdgpu_dm_enable_self_refresh(acrtc_attach, acrtc_state, timestamp_ns);
+ amdgpu_dm_enable_self_refresh(dm, acrtc_attach, acrtc_state,
+ timestamp_ns);
mutex_unlock(&dm->dc_lock);
}
@@ -10464,6 +10567,102 @@ static void dm_clear_writeback(struct amdgpu_display_manager *dm,
dc_stream_remove_writeback(dm->dc, crtc_state->stream, 0);
}
+/**
+ * amdgpu_dm_mod_power_update_streams - update mod_power stream state on modeset
+ * @state: the drm atomic state
+ * @dm: the display manager to update mod_power on
+ *
+ * Notify mod_power of stream changes on modeset events, and disable PSR/Replay
+ * in preparation for hardware programming. See also
+ * amdgpu_dm_mod_power_setup_streams() for post-modeset mod_power setup.
+ */
+static void amdgpu_dm_mod_power_update_streams(struct drm_atomic_state *state,
+ struct amdgpu_display_manager *dm)
+{
+ struct dm_crtc_state *dm_old_crtc_state, *dm_new_crtc_state;
+ struct drm_crtc_state *old_crtc_state, *new_crtc_state;
+ struct amdgpu_dm_connector *aconnector;
+ struct drm_crtc *crtc;
+ int i = 0;
+
+ for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state, new_crtc_state, i) {
+ dm_old_crtc_state = to_dm_crtc_state(old_crtc_state);
+ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+
+ if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
+ continue;
+
+ /*
+ * Update mod_power on modeset event in preparation for hw
+ * programming. Always use the old stream, since it would have
+ * been previously added to mod_power. If old stream is null (on
+ * crtc enable, for example), mod_power will no-op, which is the
+ * desired behavior.
+ */
+ if (old_crtc_state->active) {
+ scoped_guard(mutex, &dm->dc_lock) {
+ amdgpu_dm_psr_set_event(dm, dm_old_crtc_state->stream, true,
+ psr_event_hw_programming, true);
+ amdgpu_dm_replay_set_event(dm, dm_old_crtc_state->stream, true,
+ replay_event_hw_programming, true);
+ }
+ }
+
+ if (new_crtc_state->active) {
+ aconnector = (struct amdgpu_dm_connector *)
+ dm_new_crtc_state->stream->dm_stream_context;
+ if (old_crtc_state->active) {
+ mod_power_replace_stream(dm->power_module,
+ dm_old_crtc_state->stream,
+ dm_new_crtc_state->stream,
+ &aconnector->psr_caps);
+ } else {
+ mod_power_add_stream(dm->power_module,
+ dm_new_crtc_state->stream,
+ &aconnector->psr_caps);
+ }
+ } else if (old_crtc_state->active) {
+ mod_power_remove_stream(dm->power_module,
+ dm_old_crtc_state->stream);
+ }
+ }
+}
+
+/**
+ * amdgpu_dm_mod_power_setup_streams - setup mod_power stream state post modeset
+ * @state: the drm atomic state
+ * @dm: the display manager to update mod_power on
+ *
+ * Notify mod_power of mode_change. This needs to be done after dc_stream
+ * updates have been committed, and VRR parameters have been updated.
+ */
+static void amdgpu_dm_mod_power_setup_streams(struct drm_atomic_state *state,
+ struct amdgpu_display_manager *dm)
+{
+ struct dm_crtc_state *dm_new_crtc_state;
+ struct drm_crtc_state *new_crtc_state;
+ struct amdgpu_crtc *acrtc;
+ struct drm_crtc *crtc;
+ int i = 0;
+
+ for_each_new_crtc_in_state(state, crtc, new_crtc_state, i) {
+ dm_new_crtc_state = to_dm_crtc_state(new_crtc_state);
+ acrtc = to_amdgpu_crtc(crtc);
+
+ if (!drm_atomic_crtc_needs_modeset(new_crtc_state))
+ continue;
+
+ if (new_crtc_state->active) {
+ amdgpu_dm_link_setup_replay(dm_new_crtc_state->stream,
+ &acrtc->dm_irq_params.vrr_params);
+ mod_power_notify_mode_change(dm->power_module,
+ dm_new_crtc_state->stream,
+ false);
+ }
+ }
+
+}
+
static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
struct dc_state *dc_state)
{
@@ -10507,6 +10706,8 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
acrtc->wb_enabled = false;
}
+ amdgpu_dm_mod_power_update_streams(state, dm);
+
for_each_oldnew_crtc_in_state(state, crtc, old_crtc_state,
new_crtc_state, i) {
struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
@@ -10611,13 +10812,10 @@ static void amdgpu_dm_commit_streams(struct drm_atomic_state *state,
}
} /* for_each_crtc_in_state() */
- /* if there mode set or reset, disable eDP PSR, Replay */
+ /* if there is a mode set or reset, flush the vblank work queue */
if (mode_set_reset_required) {
if (dm->vblank_control_workqueue)
flush_workqueue(dm->vblank_control_workqueue);
-
- amdgpu_dm_replay_disable_all(dm);
- amdgpu_dm_psr_disable_all(dm);
}
dm_enable_per_frame_crtc_master_sync(dc_state);
@@ -11090,7 +11288,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
manage_dm_interrupts(adev, acrtc, dm_new_crtc_state);
}
/* Handle vrr on->off / off->on transitions */
- amdgpu_dm_handle_vrr_transition(dm_old_crtc_state, dm_new_crtc_state);
+ amdgpu_dm_handle_vrr_transition(dm, dm_old_crtc_state, dm_new_crtc_state);
#ifdef CONFIG_DEBUG_FS
if (new_crtc_state->active &&
@@ -11128,6 +11326,8 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
#endif
}
+ amdgpu_dm_mod_power_setup_streams(state, dm);
+
for_each_new_crtc_in_state(state, crtc, new_crtc_state, j)
if (new_crtc_state->async_flip)
wait_for_vblank = false;
@@ -13686,11 +13886,17 @@ int amdgpu_dm_process_dmub_set_config_sync(
bool dm_execute_dmub_cmd(const struct dc_context *ctx, union dmub_rb_cmd *cmd, enum dm_dmub_wait_type wait_type)
{
+ struct amdgpu_device *adev = ctx->driver_context;
+
+ guard(spinlock_irqsave)(&adev->dm.dmub_lock);
return dc_dmub_srv_cmd_run(ctx->dmub_srv, cmd, wait_type);
}
bool dm_execute_dmub_cmd_list(const struct dc_context *ctx, unsigned int count, union dmub_rb_cmd *cmd, enum dm_dmub_wait_type wait_type)
{
+ struct amdgpu_device *adev = ctx->driver_context;
+
+ guard(spinlock_irqsave)(&adev->dm.dmub_lock);
return dc_dmub_srv_cmd_run_list(ctx->dmub_srv, count, cmd, wait_type);
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
index 74a8fe1a1999..1e0ccf58cdb8 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
@@ -463,6 +463,13 @@ struct amdgpu_display_manager {
*/
struct mutex dc_lock;
+ /**
+ * @dmub_lock:
+ *
+ * Guards access to DMUB command submission.
+ */
+ spinlock_t dmub_lock;
+
/**
* @audio_lock:
*
@@ -568,6 +575,7 @@ struct amdgpu_display_manager {
struct amdgpu_dm_backlight_caps backlight_caps[AMDGPU_DM_MAX_NUM_EDP];
struct mod_freesync *freesync_module;
+ struct mod_power *power_module;
struct hdcp_workqueue *hdcp_workqueue;
/**
@@ -835,6 +843,7 @@ struct amdgpu_dm_connector {
bool force_yuv420_output;
bool force_yuv422_output;
struct dsc_preferred_settings dsc_settings;
+ struct psr_caps psr_caps;
union dp_downstream_port_present mst_downstream_port_present;
/* Cached display modes */
struct drm_display_mode freesync_vid_base;
@@ -1149,4 +1158,5 @@ int amdgpu_dm_initialize_hdmi_connector(struct amdgpu_dm_connector *aconnector);
void retrieve_dmi_info(struct amdgpu_display_manager *dm);
+void amdgpu_dm_update_backlight_caps(struct amdgpu_display_manager *dm, int bl_idx);
#endif /* __AMDGPU_DM_H__ */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
index dd79866df1fd..2663593aa35c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
@@ -503,7 +503,6 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
{
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
struct dc_stream_state *stream_state = dm_crtc_state->stream;
- struct amdgpu_dm_connector *aconnector = NULL;
bool enable = amdgpu_dm_is_valid_crc_source(source);
int ret = 0;
enum crc_poly_mode crc_poly_mode = CRC_POLY_MODE_16;
@@ -512,21 +511,17 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
if (!stream_state)
return -EINVAL;
- /* Get connector from stream */
- aconnector = (struct amdgpu_dm_connector *)stream_state->dm_stream_context;
-
mutex_lock(&adev->dm.dc_lock);
-
+ /* Notify the power module that the CRC window is active, to disable
+ * PSR/Replay. The power module checks caps internally and skips if
+ * not supported.
+ */
if (enable) {
- /* For PSR1, check that the panel has exited PSR */
- if (stream_state->link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
- amdgpu_dm_psr_wait_disable(stream_state);
+ amdgpu_dm_psr_set_event(&adev->dm, stream_state, true,
+ psr_event_crc_window_active, true);
- /* Set flag to disallow enter replay when CRC source is enabled */
- if (aconnector)
- aconnector->disallow_edp_enter_replay = true;
- amdgpu_dm_replay_disable(stream_state);
+ amdgpu_dm_replay_set_event(&adev->dm, stream_state, true,
+ replay_event_crc_window_active, true);
}
/* CRC polynomial selection only support for DCN3.6+ except DCN4.0.1 */
@@ -559,11 +554,15 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
}
if (!enable) {
- /* Clear flag to allow enter replay when CRC source is disabled */
- if (aconnector)
- aconnector->disallow_edp_enter_replay = false;
- }
+ /* Notify the power module that the CRC window is inactive, to
+ * re-enable PSR/Replay. The power module checks caps internally
+ * and skips if not supported.
+ */
+ amdgpu_dm_psr_set_event(&adev->dm, stream_state, false,
+ psr_event_crc_window_active, false);
+ amdgpu_dm_replay_set_event(&adev->dm, stream_state, false,
+ replay_event_crc_window_active, false);
+ }
unlock:
mutex_unlock(&adev->dm.dc_lock);
@@ -760,10 +759,13 @@ void amdgpu_dm_crtc_handle_crc_irq(struct drm_crtc *crtc)
uint32_t crcs[3];
unsigned long flags;
- if (crtc == NULL)
+ if (!crtc || !crtc->state || !crtc->dev)
return;
crtc_state = to_dm_crtc_state(crtc->state);
+ if (!crtc_state->stream)
+ return;
+
stream_state = crtc_state->stream;
acrtc = to_amdgpu_crtc(crtc);
drm_dev = crtc->dev;
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
index 40c5f74dbe2b..efb19f675b0c 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
@@ -34,6 +34,7 @@
#include "amdgpu_dm_plane.h"
#include "amdgpu_dm_trace.h"
#include "amdgpu_dm_debugfs.h"
+#include "modules/inc/mod_power.h"
#define HPD_DETECTION_PERIOD_uS 2000000
#define HPD_DETECTION_TIME_uS 100000
@@ -100,68 +101,33 @@ bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
}
/**
- * amdgpu_dm_crtc_set_panel_sr_feature() - Manage panel self-refresh features.
- * @dm: amdgpu display manager instance.
- * @acrtc: CRTC whose panel self-refresh state is being updated.
- * @stream: DC stream associated with @acrtc.
- * @vblank_enabled: Whether the DRM vblank counter is currently enabled.
- * @allow_sr_entry: Whether entry into self-refresh mode is allowed.
+ * amdgpu_dm_crtc_set_static_screen_optimze() - Toggle static screen optimizations.
*
- * The DRM vblank counter enable/disable action is used as the trigger to enable
- * or disable various panel self-refresh features:
+ * @dm: display manager
+ * @stream: DC stream state
+ * @sso_enable: desired static screen optimization state
+ * @allow_sr_entry: whether entry into self-refresh mode is allowed
*
- * Panel Replay and PSR SU
- * - Enable when:
- * - VRR is disabled
- * - vblank counter is disabled
- * - entry is allowed: usermode demonstrates an adequate number of fast
- * commits
- * - CRC capture window isn't active
- * - Keep enabled even when vblank counter gets enabled
- *
- * PSR1
- * - Enable condition same as above
- * - Disable when vblank counter is enabled
+ * This function uses the static-screen optimization state as the trigger to
+ * set/clear the Replay and PSR vsync-related events.
*/
-void amdgpu_dm_crtc_set_panel_sr_feature(
+void amdgpu_dm_crtc_set_static_screen_optimze(
struct amdgpu_display_manager *dm,
- struct amdgpu_crtc *acrtc,
struct dc_stream_state *stream,
- bool vblank_enabled, bool allow_sr_entry)
+ bool sso_enable, bool allow_sr_entry)
{
struct dc_link *link = stream->link;
- bool is_sr_active = (link->replay_settings.replay_allow_active ||
- link->psr_settings.psr_allow_active);
- bool is_crc_window_active = false;
- bool vrr_active = amdgpu_dm_crtc_vrr_active_irq(acrtc);
-
-#ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
- is_crc_window_active =
- amdgpu_dm_crc_window_is_activated(&acrtc->base);
-#endif
+ bool set_vsync_event = !sso_enable;
- if (link->replay_settings.replay_feature_enabled && !vrr_active &&
- allow_sr_entry && !is_sr_active && !is_crc_window_active) {
- amdgpu_dm_replay_enable(stream, true);
- } else if (vblank_enabled) {
- if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
- amdgpu_dm_psr_disable(stream, false);
- } else if (link->psr_settings.psr_feature_enabled && !vrr_active &&
- allow_sr_entry && !is_sr_active && !is_crc_window_active) {
-
- struct amdgpu_dm_connector *aconn =
- (struct amdgpu_dm_connector *) stream->dm_stream_context;
-
- if (!aconn->disallow_edp_enter_psr) {
- amdgpu_dm_psr_enable(stream);
- if (dm->idle_workqueue &&
- (dm->dc->config.disable_ips == DMUB_IPS_ENABLE) &&
- dm->dc->idle_optimizations_allowed &&
- dm->idle_workqueue->enable &&
- !dm->idle_workqueue->running)
- schedule_work(&dm->idle_workqueue->work);
- }
- }
+ if (!allow_sr_entry)
+ return;
+
+ amdgpu_dm_replay_set_event(dm, stream,
+ set_vsync_event, replay_event_vsync, set_vsync_event);
+
+ if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
+ amdgpu_dm_psr_set_event(dm, stream,
+ set_vsync_event, psr_event_vsync, set_vsync_event);
}
bool amdgpu_dm_is_headless(struct amdgpu_device *adev)
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
index 3a8094013a5d..e9fb52f0e66d 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
@@ -27,11 +27,10 @@
#ifndef __AMDGPU_DM_CRTC_H__
#define __AMDGPU_DM_CRTC_H__
-void amdgpu_dm_crtc_set_panel_sr_feature(
+void amdgpu_dm_crtc_set_static_screen_optimze(
struct amdgpu_display_manager *dm,
- struct amdgpu_crtc *acrtc,
struct dc_stream_state *stream,
- bool vblank_enabled, bool allow_sr_entry);
+ bool sso_enable, bool allow_sr_entry);
void amdgpu_dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc);
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
index 7c6deb2764aa..49226d6d0311 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
@@ -33,6 +33,7 @@
#include "amdgpu_dm.h"
#include "amdgpu_dm_debugfs.h"
#include "amdgpu_dm_replay.h"
+#include "amdgpu_dm_psr.h"
#include "dm_helpers.h"
#include "dmub/dmub_srv.h"
#include "resource.h"
@@ -3300,11 +3301,26 @@ static int disallow_edp_enter_psr_get(void *data, u64 *val)
static int disallow_edp_enter_psr_set(void *data, u64 val)
{
struct amdgpu_dm_connector *aconnector = data;
+ struct dc_link *link = aconnector->dc_link;
+
- aconnector->disallow_edp_enter_psr = val ? true : false;
+ aconnector->disallow_edp_enter_psr = (val != 0);
+ /* eDP PSR enable/disable happens during mode change in the power module.
+ * Only psr_settings.psr_version is used to decide whether PSR is enabled.
+ * So here we only update psr_version based on the debugfs setting:
+ * if disallow_edp_enter_psr is true, set psr_version to unsupported;
+ * if disallow_edp_enter_psr is false, set psr_version based on sink capability.
+ */
+ if (aconnector->disallow_edp_enter_psr)
+ link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
+ else if (aconnector->psr_caps.psr_version == 1)
+ link->psr_settings.psr_version = DC_PSR_VERSION_1;
+ else if (aconnector->psr_caps.psr_version == 2)
+ link->psr_settings.psr_version = DC_PSR_VERSION_SU_1;
return 0;
}
+
/* check if kernel disallow eDP enter replay state
* cat /sys/kernel/debug/dri/0/eDP-X/disallow_edp_enter_replay
* 0: allow edp enter replay; 1: disallow
@@ -3346,11 +3362,27 @@ static int disallow_edp_enter_replay_get(void *data, u64 *val)
static int disallow_edp_enter_replay_set(void *data, u64 val)
{
struct amdgpu_dm_connector *aconnector = data;
+ struct dc_link *link = aconnector->dc_link;
+
- aconnector->disallow_edp_enter_replay = val ? true : false;
+ aconnector->disallow_edp_enter_replay = (val != 0);
+ /* eDP replay enable/disable happens during mode change in the power module.
+ * Only replay_settings.config.replay_supported is used to decide whether
+ * replay is enabled, so here we only update replay_supported based on the
+ * debugfs setting:
+ * if disallow_edp_enter_replay is true, set replay_supported to false;
+ * if disallow_edp_enter_replay is false, set replay_supported back based on
+ * the sink's replay capability.
+ */
+ if (aconnector->disallow_edp_enter_replay)
+ link->replay_settings.config.replay_supported = false;
+ else
+ link->replay_settings.config.replay_supported =
+ link->replay_settings.config.replay_cap_support;
return 0;
}
+
static int dmub_trace_mask_set(void *data, u64 val)
{
struct amdgpu_device *adev = data;
@@ -3485,6 +3517,7 @@ DEFINE_DEBUGFS_ATTRIBUTE(disallow_edp_enter_replay_fops,
DEFINE_DEBUGFS_ATTRIBUTE(ips_residency_cntl_fops, ips_residency_cntl_get,
ips_residency_cntl_set, "%llu\n");
+
DEFINE_SHOW_ATTRIBUTE(current_backlight);
DEFINE_SHOW_ATTRIBUTE(target_backlight);
DEFINE_SHOW_ATTRIBUTE(ips_status);
@@ -3855,28 +3888,35 @@ DEFINE_DEBUGFS_ATTRIBUTE(crc_win_y_end_fops, crc_win_y_end_get,
static int crc_win_update_set(void *data, u64 val)
{
struct drm_crtc *crtc = data;
- struct amdgpu_crtc *acrtc;
+ struct amdgpu_crtc *acrtc = to_amdgpu_crtc(crtc);
struct amdgpu_device *adev = drm_to_adev(crtc->dev);
if (val) {
- acrtc = to_amdgpu_crtc(crtc);
mutex_lock(&adev->dm.dc_lock);
- /* PSR may write to OTG CRC window control register,
- * so close it before starting secure_display.
+ /* PSR/Replay may write to the OTG CRC window control register, so
+ * deactivate it before starting secure_display by sending a disable event.
*/
- amdgpu_dm_psr_disable(acrtc->dm_irq_params.stream, true);
+ amdgpu_dm_psr_set_event(&adev->dm, acrtc->dm_irq_params.stream, true,
+ psr_event_crc_window_active, true);
+ amdgpu_dm_replay_set_event(&adev->dm, acrtc->dm_irq_params.stream, true,
+ replay_event_crc_window_active, true);
spin_lock_irq(&adev_to_drm(adev)->event_lock);
-
acrtc->dm_irq_params.window_param[0].enable = true;
acrtc->dm_irq_params.window_param[0].update_win = true;
acrtc->dm_irq_params.window_param[0].skip_frame_cnt = 0;
acrtc->dm_irq_params.crc_window_activated = true;
-
spin_unlock_irq(&adev_to_drm(adev)->event_lock);
mutex_unlock(&adev->dm.dc_lock);
+ } else {
+ /* Clear the disable events to allow PSR/Replay to become active */
+ mutex_lock(&adev->dm.dc_lock);
+ amdgpu_dm_psr_set_event(&adev->dm, acrtc->dm_irq_params.stream, false,
+ psr_event_crc_window_active, false);
+ amdgpu_dm_replay_set_event(&adev->dm, acrtc->dm_irq_params.stream, false,
+ replay_event_crc_window_active, false);
+ mutex_unlock(&adev->dm.dc_lock);
}
-
return 0;
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c
index a3ccb6fdc372..f2f6c7936e58 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c
@@ -292,24 +292,16 @@ static void dm_ism_commit_idle_optimization_state(struct amdgpu_dm_ism *ism,
*/
if (stream && stream->link) {
/*
- * If allow_panel_sso is true when disabling vblank, allow
- * deeper panel sleep states such as PSR1 and Replay static
- * screen optimization.
- */
- if (!vblank_enabled && allow_panel_sso) {
- amdgpu_dm_crtc_set_panel_sr_feature(
- dm, acrtc, stream, false,
- acrtc->dm_irq_params.allow_sr_entry);
- } else if (vblank_enabled) {
- /* Make sure to exit SSO on vblank enable */
- amdgpu_dm_crtc_set_panel_sr_feature(
- dm, acrtc, stream, true,
- acrtc->dm_irq_params.allow_sr_entry);
- }
- /*
- * Else, vblank_enabled == false and allow_panel_sso == false;
- * do nothing here.
+ * If the OS requires vblank events (or vblank is otherwise enabled),
+ * do not allow static screen optimizations.
+ *
+ * Keep ism->allow_static_screen_optimizations unchanged so the
+ * hysteresis-based decision can be reused once vblank is disabled.
*/
+ allow_panel_sso = allow_panel_sso && !vblank_enabled;
+ amdgpu_dm_crtc_set_static_screen_optimze(
+ dm, stream, allow_panel_sso,
+ acrtc->dm_irq_params.allow_sr_entry);
}
/*
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
index 99d6d6c93561..dc5913a6456e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
@@ -58,171 +58,76 @@ static bool link_supports_psrsu(struct dc_link *link)
return false;
}
-/*
- * amdgpu_dm_set_psr_caps() - set link psr capabilities
- * @link: link
- *
- */
-void amdgpu_dm_set_psr_caps(struct dc_link *link)
+static void amdgpu_dm_psr_fill_caps(struct dc_link *link, struct psr_caps *caps)
{
- if (!(link->connector_signal & SIGNAL_TYPE_EDP)) {
- link->psr_settings.psr_feature_enabled = false;
- return;
- }
-
- if (link->type == dc_connection_none) {
- link->psr_settings.psr_feature_enabled = false;
- return;
- }
-
- if (link->dpcd_caps.psr_info.psr_version == 0) {
- link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
- link->psr_settings.psr_feature_enabled = false;
-
- } else {
- unsigned int panel_inst = 0;
-
- if (link_supports_psrsu(link))
- link->psr_settings.psr_version = DC_PSR_VERSION_SU_1;
- else
- link->psr_settings.psr_version = DC_PSR_VERSION_1;
-
- link->psr_settings.psr_feature_enabled = true;
-
- /*disable allow psr/psrsu/replay on eDP1*/
- if (dc_get_edp_link_panel_inst(link->ctx->dc, link, &panel_inst) && panel_inst == 1) {
- link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
- link->psr_settings.psr_feature_enabled = false;
- }
- }
+ struct dpcd_caps *dpcd_caps = &link->dpcd_caps;
+ unsigned int power_opts = 0;
+
+ if (amdgpu_dc_feature_mask & DC_PSR_ALLOW_SMU_OPT)
+ power_opts |= psr_power_opt_smu_opt_static_screen;
+ power_opts |= psr_power_opt_z10_static_screen;
+
+ if (link->psr_settings.psr_version == DC_PSR_VERSION_1)
+ caps->psr_version = 1;
+ else if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+ caps->psr_version = 2;
+
+ caps->psr_rfb_setup_time = (6 - dpcd_caps->psr_info.psr_dpcd_caps.bits.PSR_SETUP_TIME) * 55;
+ caps->psr_exit_link_training_required =
+ !dpcd_caps->psr_info.psr_dpcd_caps.bits.LINK_TRAINING_ON_EXIT_NOT_REQUIRED;
+ caps->edp_revision = dpcd_caps->edp_rev;
+ caps->support_ver = dpcd_caps->psr_info.psr_version;
+ caps->su_granularity_required =
+ dpcd_caps->psr_info.psr_dpcd_caps.bits.SU_GRANULARITY_REQUIRED;
+ caps->y_coordinate_required = dpcd_caps->psr_info.psr_dpcd_caps.bits.Y_COORDINATE_REQUIRED;
+ caps->su_y_granularity = dpcd_caps->psr_info.psr2_su_y_granularity_cap;
+ caps->alpm_cap = dpcd_caps->alpm_caps.bits.AUX_WAKE_ALPM_CAP;
+ caps->standby_support = dpcd_caps->alpm_caps.bits.PM_STATE_2A_SUPPORT;
+ caps->rate_control_caps = 0; /* TODO: read in rc caps from aux */
+ caps->psr_power_opt_flag = power_opts;
}
/*
- * amdgpu_dm_link_setup_psr() - configure psr link
- * @stream: stream state
- *
- * Return: true if success
+ * amdgpu_dm_set_psr_caps() - set link psr capabilities
+ * @link: link
+ * @aconnector: amdgpu_dm_connector
*/
-bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream)
+bool amdgpu_dm_set_psr_caps(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
{
- struct dc_link *link = NULL;
- struct psr_config psr_config = {0};
- struct psr_context psr_context = {0};
- struct dc *dc = NULL;
- bool ret = false;
+ struct dc *dc;
+ unsigned int panel_inst = 0;
- if (stream == NULL)
+ if (!link || !aconnector)
return false;
- link = stream->link;
dc = link->ctx->dc;
- if (link->psr_settings.psr_version != DC_PSR_VERSION_UNSUPPORTED) {
- mod_power_calc_psr_configs(&psr_config, link, stream);
-
- /* linux DM specific updating for psr config fields */
- psr_config.allow_smu_optimizations =
- (amdgpu_dc_feature_mask & DC_PSR_ALLOW_SMU_OPT) &&
- mod_power_only_edp(dc->current_state, stream);
- psr_config.allow_multi_disp_optimizations =
- (amdgpu_dc_feature_mask & DC_PSR_ALLOW_MULTI_DISP_OPT);
-
- if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1) {
- if (!psr_su_set_dsc_slice_height(dc, link, stream, &psr_config))
- return false;
- }
-
- ret = dc_link_setup_psr(link, stream, &psr_config, &psr_context);
-
- }
- DRM_DEBUG_DRIVER("PSR link: %d\n", link->psr_settings.psr_feature_enabled);
-
- return ret;
-}
-
-/*
- * amdgpu_dm_psr_enable() - enable psr f/w
- * @stream: stream state
- *
- */
-void amdgpu_dm_psr_enable(struct dc_stream_state *stream)
-{
- struct dc_link *link = stream->link;
- unsigned int vsync_rate_hz = 0;
- struct dc_static_screen_params params = {0};
- /* Calculate number of static frames before generating interrupt to
- * enter PSR.
- */
- // Init fail safe of 2 frames static
- unsigned int num_frames_static = 2;
- unsigned int power_opt = 0;
- bool psr_enable = true;
-
- DRM_DEBUG_DRIVER("Enabling psr...\n");
-
- vsync_rate_hz = div64_u64(div64_u64((
- stream->timing.pix_clk_100hz * (uint64_t)100),
- stream->timing.v_total),
- stream->timing.h_total);
-
- /* Round up
- * Calculate number of frames such that at least 30 ms of time has
- * passed.
- */
- if (vsync_rate_hz != 0) {
- unsigned int frame_time_microsec = 1000000 / vsync_rate_hz;
-
- num_frames_static = (30000 / frame_time_microsec) + 1;
- }
-
- params.triggers.cursor_update = true;
- params.triggers.overlay_update = true;
- params.triggers.surface_update = true;
- params.num_frames = num_frames_static;
+ /* Reset psr version first */
+ link->psr_settings.psr_version = DC_PSR_VERSION_UNSUPPORTED;
- dc_stream_set_static_screen_params(link->ctx->dc,
- &stream, 1,
- ¶ms);
+ if (!dc->caps.dmub_caps.psr)
+ return false;
- /*
- * Only enable static-screen optimizations for PSR1. For PSR SU, this
- * causes vstartup interrupt issues, used by amdgpu_dm to send vblank
- * events.
- */
- if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
- power_opt |= psr_power_opt_z10_static_screen;
+ if (!(link->connector_signal & SIGNAL_TYPE_EDP))
+ return false;
- dc_link_set_psr_allow_active(link, &psr_enable, false, false, &power_opt);
+ if (link->type == dc_connection_none)
+ return false;
- if (link->ctx->dc->caps.ips_support)
- dc_allow_idle_optimizations(link->ctx->dc, true);
-}
+ if (link->dpcd_caps.psr_info.psr_version == 0)
+ return false;
-/*
- * amdgpu_dm_psr_disable() - disable psr f/w
- * @stream: stream state
- *
- * Return: true if success
- */
-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait)
-{
- bool psr_enable = false;
+ /* disallow psr/psrsu/replay on eDP1 */
+ if (dc_get_edp_link_panel_inst(link->ctx->dc, link, &panel_inst) && panel_inst == 1)
+ return false;
- DRM_DEBUG_DRIVER("Disabling psr...\n");
+ if (link_supports_psrsu(link))
+ link->psr_settings.psr_version = DC_PSR_VERSION_SU_1;
+ else
+ link->psr_settings.psr_version = DC_PSR_VERSION_1;
- return dc_link_set_psr_allow_active(stream->link, &psr_enable, wait, false, NULL);
-}
-
-/*
- * amdgpu_dm_psr_disable_all() - disable psr f/w for all streams
- * if psr is enabled on any stream
- *
- * Return: true if success
- */
-bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm)
-{
- DRM_DEBUG_DRIVER("Disabling psr if psr is enabled on any stream\n");
- return dc_set_psr_allow_active(dm->dc, false);
+ amdgpu_dm_psr_fill_caps(link, &aconnector->psr_caps);
+ return true;
}
/*
@@ -250,36 +155,37 @@ bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm)
break;
}
}
-
return allow_active;
}
-/**
- * amdgpu_dm_psr_wait_disable() - Wait for eDP panel to exit PSR
- * @stream: stream state attached to the eDP link
- *
- * Waits for a max of 500ms for the eDP panel to exit PSR.
+/*
+ * amdgpu_dm_psr_set_event() - set or clear PSR event for stream
+ * @dm: pointer to amdgpu_display_manager
+ * @stream: pointer to dc_stream_state
+ * @set_event: true to set event, false to clear event
+ * @event: PSR event type
+ * @wait_for_disable: whether to wait for PSR to be disabled
*
- * Return: true if panel exited PSR, false otherwise.
+ * Return: true if successful, false otherwise
*/
-bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream)
+bool amdgpu_dm_psr_set_event(struct amdgpu_display_manager *dm, struct dc_stream_state *stream,
+ bool set_event, enum psr_event event, bool wait_for_disable)
{
- enum dc_psr_state psr_state = PSR_STATE0;
- struct dc_link *link = stream->link;
- int retry_count;
+ unsigned int psr_events;
- if (link == NULL)
+ /* Validate all required parameters */
+ if (!stream || !stream->link ||
+ !stream->link->psr_settings.psr_feature_enabled)
return false;
- for (retry_count = 0; retry_count <= 1000; retry_count++) {
- dc_link_get_psr_state(link, &psr_state);
- if (psr_state == PSR_STATE0)
- break;
- udelay(500);
- }
-
- if (retry_count == 1000)
+ /* Get current psr events */
+ if (!mod_power_get_psr_event(dm->power_module, stream, &psr_events))
return false;
- return true;
+ /* If the event is already in the desired state, return true. */
+ if ((psr_events & event) == (set_event ? event : 0))
+ return true;
+
+ return mod_power_set_psr_event(dm->power_module, stream,
+ set_event, event, wait_for_disable);
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
index 4fb8626913cf..16d535806ad6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
@@ -28,16 +28,15 @@
#define AMDGPU_DM_AMDGPU_DM_PSR_H_
#include "amdgpu.h"
+#include "dc.h"
+#include "modules/inc/mod_power.h"
/* the number of pageflips before enabling psr */
#define AMDGPU_DM_PSR_ENTRY_DELAY 5
-void amdgpu_dm_set_psr_caps(struct dc_link *link);
-void amdgpu_dm_psr_enable(struct dc_stream_state *stream);
-bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream);
-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait);
-bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
+bool amdgpu_dm_set_psr_caps(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm);
-bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream);
-
+bool amdgpu_dm_psr_set_event(struct amdgpu_display_manager *dm,
+ struct dc_stream_state *stream, bool set_event, enum psr_event event,
+ bool wait_for_disable);
#endif /* AMDGPU_DM_AMDGPU_DM_PSR_H_ */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
index 8c150b001105..297125d1db70 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.c
@@ -27,7 +27,6 @@
#include "amdgpu_dm_replay.h"
#include "dc_dmub_srv.h"
#include "dc.h"
-#include "dm_helpers.h"
#include "amdgpu_dm.h"
#include "modules/power/power_helpers.h"
#include "dmub/inc/dmub_cmd.h"
@@ -99,13 +98,29 @@ bool amdgpu_dm_set_replay_caps(struct dc_link *link, struct amdgpu_dm_connector
!dc->ctx->dmub_srv->dmub->feature_caps.replay_supported)
return false;
+ /* Mark Replay as supported on the link and update related attributes.
+ * This flag indicates the DPCD caps & amd_vsdb caps satisfy the replay requirements.
+ */
+ pr_config.replay_cap_support = true;
+
// Mark Replay is supported in pr_config
pr_config.replay_supported = true;
+ pr_config.replay_enable_option = pr_enable_option_general_ui |
+ pr_enable_option_static_screen |
+ pr_enable_option_static_screen_coasting;
+ pr_config.replay_power_opt_supported = replay_power_opt_smu_opt_static_screen |
+ replay_power_opt_z10_static_screen;
+ pr_config.replay_smu_opt_supported = false;
+ pr_config.replay_support_fast_resync_in_ultra_sleep_mode =
+ aconnector->max_vfreq >= 2 * aconnector->min_vfreq;
+ pr_config.force_disable_desync_error_check = false;
+
debug_flags = (union replay_debug_flags *)&pr_config.debug_flags;
debug_flags->u32All = 0;
debug_flags->bitfields.visual_confirm =
link->ctx->dc->debug.visual_confirm == VISUAL_CONFIRM_REPLAY;
+ debug_flags->bitfields.skip_crtc_disabled = dc->debug.replay_skip_crtc_disabled;
init_replay_config(link, &pr_config);
@@ -113,104 +128,80 @@ bool amdgpu_dm_set_replay_caps(struct dc_link *link, struct amdgpu_dm_connector
}
/*
- * amdgpu_dm_link_setup_replay() - configure replay link
- * @link: link
- * @aconnector: aconnector
+ * amdgpu_dm_link_setup_replay() - configure replay settings
+ * @stream: pointer to dc_stream_state structure
+ * @vrr_params: pointer to mod_vrr_params structure containing VRR parameters
*
+ * Configure replay link settings, including coasting vtotal calculations.
+ *
+ * Return: true if successful, false if any parameter is invalid or replay not supported
*/
-bool amdgpu_dm_link_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector)
+bool amdgpu_dm_link_setup_replay(struct dc_stream_state *stream,
+ struct mod_vrr_params *vrr_params)
{
- struct replay_config *pr_config;
+ struct dc_link *link;
+ unsigned int static_coasting_vtotal;
+ unsigned int nom_coasting_vtotal;
- if (link == NULL || aconnector == NULL)
+ if (!stream || !stream->link || !vrr_params)
return false;
- pr_config = &link->replay_settings.config;
-
- if (!pr_config->replay_supported)
+ link = stream->link;
+ if (!link->replay_settings.config.replay_supported)
return false;
- pr_config->replay_power_opt_supported = 0x11;
- pr_config->replay_smu_opt_supported = false;
- pr_config->replay_enable_option |= pr_enable_option_static_screen;
- pr_config->replay_support_fast_resync_in_ultra_sleep_mode = aconnector->max_vfreq >= 2 * aconnector->min_vfreq;
- pr_config->replay_timing_sync_supported = false;
+ if (link->replay_settings.replay_feature_enabled)
+ return true;
- if (!pr_config->replay_timing_sync_supported)
- pr_config->replay_enable_option &= ~pr_enable_option_general_ui;
+ calculate_replay_link_off_frame_count(link, stream->timing.v_total,
+ stream->timing.h_total);
- link->replay_settings.replay_feature_enabled = true;
+ nom_coasting_vtotal = stream->timing.v_total;
+ static_coasting_vtotal = mod_freesync_calc_v_total_from_refresh(stream,
+ vrr_params->min_refresh_in_uhz);
+ set_replay_coasting_vtotal(link, PR_COASTING_TYPE_NOM,
+ nom_coasting_vtotal);
+ set_replay_coasting_vtotal(link, PR_COASTING_TYPE_STATIC,
+ static_coasting_vtotal);
return true;
}
/*
- * amdgpu_dm_replay_enable() - enable replay f/w
- * @stream: stream state
+ * amdgpu_dm_replay_set_event() - set or clear replay event for a stream
+ * @dm: pointer to amdgpu_display_manager
+ * @stream: pointer to dc_stream_state
+ * @set_event: true to set event, false to clear event
+ * @event: replay event type to set or clear
+ * @wait_for_disable: whether to wait for replay to be disabled before returning
*
- * Return: true if success
- */
-bool amdgpu_dm_replay_enable(struct dc_stream_state *stream, bool wait)
-{
- bool replay_active = true;
- struct dc_link *link = NULL;
- struct amdgpu_dm_connector *aconnector = NULL;
-
- if (stream == NULL)
- return false;
-
- /* Check if replay is disabled by connector flag */
- aconnector = (struct amdgpu_dm_connector *)stream->dm_stream_context;
- if (!aconnector || aconnector->disallow_edp_enter_replay) {
- return false;
- }
-
- link = stream->link;
-
- if (link) {
- link->dc->link_srv->dp_setup_replay(link, stream);
- link->dc->link_srv->edp_set_coasting_vtotal(link, stream->timing.v_total, 0);
- DRM_DEBUG_DRIVER("Enabling replay...\n");
- link->dc->link_srv->edp_set_replay_allow_active(link, &replay_active, wait, false, NULL);
- return true;
- }
-
- return false;
-}
-
-/*
- * amdgpu_dm_replay_disable() - disable replay f/w
- * @stream: stream state
+ * This function sets or clears a specific replay event for the given stream.
+ * It temporarily disables idle optimizations during the operation to ensure
+ * hardware access is available.
*
- * Return: true if success
+ * Return: true if successful, false if any parameter is invalid or operation fails
*/
-bool amdgpu_dm_replay_disable(struct dc_stream_state *stream)
+bool amdgpu_dm_replay_set_event(struct amdgpu_display_manager *dm,
+ struct dc_stream_state *stream,
+ bool set_event,
+ enum replay_event event,
+ bool wait_for_disable)
{
- bool replay_active = false;
- struct dc_link *link = NULL;
+ unsigned int replay_events;
- if (stream == NULL)
+ /* Validate all required parameters */
+ if (!stream || !stream->link ||
+ !stream->link->replay_settings.replay_feature_enabled)
return false;
- link = stream->link;
+ /* Get current replay events */
+ if (!mod_power_get_replay_event(dm->power_module, stream, &replay_events))
+ return false;
- if (link) {
- DRM_DEBUG_DRIVER("Disabling replay...\n");
- link->dc->link_srv->edp_set_replay_allow_active(stream->link, &replay_active, true, false, NULL);
+ /* If the event is already in the desired state, return true. */
+ if ((replay_events & event) == (set_event ? event : 0))
return true;
- }
-
- return false;
-}
-/*
- * amdgpu_dm_replay_disable_all() - disable replay f/w
- * if replay is enabled on any stream
- *
- * Return: true if success
- */
-bool amdgpu_dm_replay_disable_all(struct amdgpu_display_manager *dm)
-{
- DRM_DEBUG_DRIVER("Disabling replay if replay is enabled on any stream\n");
- return dc_set_replay_allow_active(dm->dc, false);
+ return mod_power_set_replay_event(dm->power_module, stream,
+ set_event, event, wait_for_disable);
}
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
index 73b6c67ae5e7..021bf0255516 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_replay.h
@@ -28,22 +28,16 @@
#define AMDGPU_DM_AMDGPU_DM_REPLAY_H_
#include "amdgpu.h"
+#include "dc.h"
+#include "modules/inc/mod_power.h"
-enum replay_enable_option {
- pr_enable_option_static_screen = 0x1,
- pr_enable_option_mpo_video = 0x2,
- pr_enable_option_full_screen_video = 0x4,
- pr_enable_option_general_ui = 0x8,
- pr_enable_option_static_screen_coasting = 0x10000,
- pr_enable_option_mpo_video_coasting = 0x20000,
- pr_enable_option_full_screen_video_coasting = 0x40000,
-};
-
-bool amdgpu_dm_link_supports_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
-bool amdgpu_dm_replay_enable(struct dc_stream_state *stream, bool enable);
-bool amdgpu_dm_set_replay_caps(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
-bool amdgpu_dm_link_setup_replay(struct dc_link *link, struct amdgpu_dm_connector *aconnector);
-bool amdgpu_dm_replay_disable(struct dc_stream_state *stream);
-bool amdgpu_dm_replay_disable_all(struct amdgpu_display_manager *dm);
-
+bool amdgpu_dm_link_supports_replay(struct dc_link *link,
+ struct amdgpu_dm_connector *aconnector);
+bool amdgpu_dm_set_replay_caps(struct dc_link *link,
+ struct amdgpu_dm_connector *aconnector);
+bool amdgpu_dm_link_setup_replay(struct dc_stream_state *stream,
+ struct mod_vrr_params *vrr_params);
+bool amdgpu_dm_replay_set_event(struct amdgpu_display_manager *dm,
+ struct dc_stream_state *stream, bool set_event,
+ enum replay_event event, bool wait_for_disable);
#endif /* AMDGPU_DM_AMDGPU_DM_REPLAY_H_ */
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
index 0ef7435ffda9..84dcb573d98f 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_services.c
@@ -64,12 +64,28 @@ void dm_trace_smu_exit(bool success, uint32_t response, struct dc_context *ctx)
/**** power component interfaces ****/
bool dm_query_extended_brightness_caps(struct dc_context *ctx,
- enum dm_acpi_display_type display,
- struct dm_acpi_atif_backlight_caps *pCaps)
+ enum dm_acpi_display_type display, struct dm_acpi_atif_backlight_caps *pCaps)
{
- /*
- * TODO: Implement query for extended backlight caps.
- * Some plumbing required, see amdgpu_atif_query_backlight_caps()
- */
- return false;
+ struct amdgpu_device *adev;
+ struct amdgpu_display_manager *dm;
+ int bl_index = (display == AcpiDisplayType_LCD1) ? 0 : 1;
+
+ if (!ctx || !pCaps || !ctx->driver_context)
+ return false;
+
+ adev = (struct amdgpu_device *)ctx->driver_context;
+ dm = &adev->dm;
+
+ amdgpu_dm_update_backlight_caps(dm, bl_index);
+
+ pCaps->num_data_points = dm->backlight_caps[bl_index].data_points;
+ pCaps->max_input_signal = dm->backlight_caps[bl_index].max_input_signal;
+ pCaps->min_input_signal = dm->backlight_caps[bl_index].min_input_signal;
+ pCaps->ac_level_percentage = dm->backlight_caps[bl_index].ac_level;
+ pCaps->dc_level_percentage = dm->backlight_caps[bl_index].dc_level;
+
+ if (pCaps->num_data_points > 0)
+ memcpy(pCaps->data_points, dm->backlight_caps[bl_index].luminance_data,
+ sizeof(struct dm_bl_data_point) * pCaps->num_data_points);
+ return true;
}
--
2.43.0
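Both amdgpu_dm_psr_set_event() and amdgpu_dm_replay_set_event() in the patch above share the same check-then-forward pattern: read the current event mask, return early when the requested bit is already in the desired state, and otherwise hand off to the power module. A minimal user-space model of just that mask logic (event_update_needed/event_apply are hypothetical names for illustration, not part of the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the early-return check in amdgpu_dm_psr_set_event() /
 * amdgpu_dm_replay_set_event(): no update is needed when setting a bit
 * that is already set, or clearing a bit that is already clear. */
static bool event_update_needed(unsigned int events, bool set_event,
				unsigned int event)
{
	return (events & event) != (set_event ? event : 0u);
}

/* Apply the requested set/clear to the event mask, as the power module
 * presumably does once the check passes (assumption, not kernel code). */
static unsigned int event_apply(unsigned int events, bool set_event,
				unsigned int event)
{
	return set_event ? (events | event) : (events & ~event);
}
```

Setting an already-set event (or clearing an already-clear one) takes the "return true" fast path without touching the power module, which is why callers such as crc_win_update_set() can send the same disable event unconditionally.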
* [PATCH 10/19] drm/amd/display: Fix fpu guard warning
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (8 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 09/19] drm/amd/display: Add " Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-17 8:07 ` mikhail.v.gavrilov
2026-04-15 7:39 ` [PATCH 11/19] drm/amd/display: Add Replay/PSR active check in link loss status check Chenyu Chen
` (9 subsequent siblings)
19 siblings, 1 reply; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Wayne Lin, Dillon Varone,
Rafal Ostrowski, Chenyu Chen
From: Wayne Lin <Wayne.Lin@amd.com>
[Why]
Due to improper fpu guarding, we encounter this warning during boot up:
[ 10.027021] WARNING: drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c:58 at dc_assert_fp_enabled+0x12/0x20 [amdgpu], CPU#8: (udev-worker)/469
[ 10.027644] Modules linked in: binfmt_misc snd_ctl_led nls_iso8859_1 intel_rapl_msr amd_atl intel_rapl_common amdgpu(+) snd_acp_legacy_mach snd_acp_mach snd_soc_nau8821 snd_acp3x_pdm_dma snd_acp3x_rn snd_soc_dmic snd_sof_amd_acp63 snd_sof_amd_vangogh snd_sof_amd_rembrandt snd_sof_amd_renoir snd_sof_amd_acp snd_sof_pci snd_hda_codec_alc269 snd_sof_xtensa_dsp snd_hda_scodec_component snd_hda_codec_realtek_lib snd_sof snd_hda_codec_generic snd_sof_utils snd_pci_ps snd_soc_acpi_amd_match snd_amd_sdw_acpi soundwire_amd snd_hda_codec_atihdmi soundwire_generic_allocation snd_hda_codec_hdmi soundwire_bus snd_soc_sdca edac_mce_amd snd_hda_intel snd_soc_core snd_hda_codec kvm_amd snd_compress snd_hda_core ac97_bus ee1004 amdxcp snd_pcm_dmaengine snd_intel_dspcfg snd_intel_sdw_acpi kvm drm_panel_backlight_quirks snd_rpl_pci_acp6x gpu_sched snd_hwdep snd_acp_pci irqbypass snd_amd_acpi_mach drm_buddy snd_acp_legacy_common snd_seq_midi ghash_clmulni_intel drm_ttm_helper aesni_intel snd_seq_midi_event snd_pci_acp6x joydev rapl
[ 10.027750] snd_pcm snd_rawmidi ttm snd_seq snd_pci_acp5x drm_exec drm_suballoc_helper snd_seq_device wmi_bmof snd_rn_pci_acp3x drm_display_helper snd_timer snd_acp_config cec snd_soc_acpi snd rc_core i2c_piix4 ccp snd_pci_acp3x i2c_smbus soundcore k10temp i2c_algo_bit spi_amd cdc_mbim input_leds cdc_wdm mac_hid sch_fq_codel msr parport_pc ppdev lp parport efi_pstore nfnetlink dmi_sysfs autofs4 cdc_ncm cdc_ether usbnet mii hid_logitech_hidpp hid_logitech_dj hid_generic nvme nvme_core ahci serio_raw nvme_keyring usbhid ucsi_acpi amd_xgbe nvme_auth libahci hkdf typec_ucsi video typec wmi i2c_hid_acpi i2c_hid hid
[ 10.027853] CPU: 8 UID: 0 PID: 469 Comm: (udev-worker) Not tainted 6.19.0asdn-260408-asdn #1 PREEMPT(voluntary)
[ 10.027858] Hardware name: AMD Crater-RN/Crater-RN, BIOS TCR1004A 03/12/2024
[ 10.027861] RIP: 0010:dc_assert_fp_enabled+0x12/0x20 [amdgpu]
[ 10.028416] Code: 00 00 00 00 00 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 65 8b 05 39 79 cc c4 85 c0 7e 07 31 c0 e9 9e 75 2a c3 <0f> 0b 31 c0 e9 95 75 2a c3 0f 1f 44 00 00 90 90 90 90 90 90 90 90
[ 10.028420] RSP: 0018:ffffcca10188b348 EFLAGS: 00010246
[ 10.028425] RAX: 0000000000000000 RBX: ffff88c6077f8000 RCX: 0000000000000000
[ 10.028428] RDX: ffff88c607d0e400 RSI: ffffffffc204d860 RDI: ffff88c624c00000
[ 10.028430] RBP: ffffcca10188b3e8 R08: ffff88c624c35c88 R09: 0000000000000000
[ 10.028433] R10: 0000000000000000 R11: 0000000000000000 R12: ffffcca10188b548
[ 10.028435] R13: ffff88c60be5bd00 R14: ffffffffc204d860 R15: ffff88c624c00000
[ 10.028438] FS: 00007c80c2432980(0000) GS:ffff88cdc7464000(0000) knlGS:0000000000000000
[ 10.028441] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 10.028443] CR2: 00007866ae013da8 CR3: 000000010a511000 CR4: 0000000000350ef0
[ 10.028446] Call Trace:
[ 10.028449] <TASK>
[ 10.028452] ? dcn21_update_bw_bounding_box+0x38/0xb30 [amdgpu]
[ 10.028991] ? srso_return_thunk+0x5/0x5f
[ 10.029001] dc_create+0x37c/0x730 [amdgpu]
[ 10.029505] ? srso_return_thunk+0x5/0x5f
[ 10.029512] amdgpu_dm_init+0x374/0x2ff0 [amdgpu]
[ 10.030053] ? srso_return_thunk+0x5/0x5f
[ 10.030057] ? __irq_work_queue_local+0x61/0xe0
[ 10.030063] ? srso_return_thunk+0x5/0x5f
[ 10.030067] ? irq_work_queue+0x2f/0x70
[ 10.030071] ? srso_return_thunk+0x5/0x5f
[ 10.030075] ? __wake_up_klogd+0x75/0xa0
[ 10.030081] ? srso_return_thunk+0x5/0x5f
[ 10.030085] ? vprintk_emit+0x35b/0x3f0
[ 10.030102] dm_hw_init+0x1c/0x110 [amdgpu]
[ 10.030625] amdgpu_device_init+0x23e8/0x3210 [amdgpu]
[ 10.031041] ? pci_read+0x55/0x90
[ 10.031047] ? srso_return_thunk+0x5/0x5f
[ 10.031051] ? pci_read_config_word+0x27/0x50
[ 10.031057] ? srso_return_thunk+0x5/0x5f
[ 10.031061] ? do_pci_enable_device+0x155/0x180
[ 10.031068] amdgpu_driver_load_kms+0x1a/0xd0 [amdgpu]
[ 10.031486] amdgpu_pci_probe+0x28c/0x6f0 [amdgpu]
[ 10.031902] local_pci_probe+0x47/0xb0
[ 10.031908] pci_device_probe+0xf3/0x270
[ 10.031914] really_probe+0xf1/0x410
[ 10.031920] __driver_probe_device+0x8c/0x190
[ 10.031924] driver_probe_device+0x24/0xd0
[ 10.031928] __driver_attach+0x10b/0x240
[ 10.031932] ? __pfx___driver_attach+0x10/0x10
[ 10.031936] bus_for_each_dev+0x8c/0xf0
[ 10.031942] driver_attach+0x1e/0x30
[ 10.031947] bus_add_driver+0x160/0x2a0
[ 10.031952] driver_register+0x5e/0x130
[ 10.031957] ? __pfx_amdgpu_init+0x10/0x10 [amdgpu]
[ 10.032361] __pci_register_driver+0x5e/0x70
[ 10.032366] amdgpu_init+0x5d/0xff0 [amdgpu]
[ 10.032768] ? srso_return_thunk+0x5/0x5f
[ 10.032773] do_one_initcall+0x5d/0x340
[ 10.032783] do_init_module+0x97/0x2c0
[ 10.032788] load_module+0x2b49/0x2c30
[ 10.032800] init_module_from_file+0xf4/0x120
[ 10.032804] ? init_module_from_file+0xf4/0x120
[ 10.032813] idempotent_init_module+0x10f/0x300
[ 10.032820] __x64_sys_finit_module+0x73/0xf0
[ 10.032824] ? srso_return_thunk+0x5/0x5f
[ 10.032829] x64_sys_call+0x1d68/0x26b0
[ 10.032834] do_syscall_64+0x81/0x500
[ 10.032839] ? srso_return_thunk+0x5/0x5f
[ 10.032843] ? do_syscall_64+0x2e5/0x500
[ 10.032848] ? srso_return_thunk+0x5/0x5f
[ 10.032852] ? native_flush_tlb_global+0x95/0xb0
[ 10.032860] ? srso_return_thunk+0x5/0x5f
[ 10.032864] ? __flush_tlb_all+0x13/0x60
[ 10.032870] ? srso_return_thunk+0x5/0x5f
[ 10.032874] ? do_flush_tlb_all+0xe/0x20
[ 10.032879] ? srso_return_thunk+0x5/0x5f
[ 10.032882] ? __flush_smp_call_function_queue+0x9c/0x430
[ 10.032888] ? srso_return_thunk+0x5/0x5f
[ 10.032897] ? irqentry_exit+0xb2/0x740
[ 10.032901] ? srso_return_thunk+0x5/0x5f
[ 10.032906] ? srso_return_thunk+0x5/0x5f
[ 10.032911] entry_SYSCALL_64_after_hwframe+0x76/0x7e
[ 10.032915] RIP: 0033:0x7c80c1d3490d
[ 10.032920] Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d d3 f4 0f 00 f7 d8 64 89 01 48
[ 10.032923] RSP: 002b:00007fff3a12fe28 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 10.032928] RAX: ffffffffffffffda RBX: 00005c44096804f0 RCX: 00007c80c1d3490d
[ 10.032930] RDX: 0000000000000000 RSI: 00005c4409681690 RDI: 000000000000002b
[ 10.032933] RBP: 00007fff3a12fec0 R08: 0000000000000000 R09: 00005c4409681790
[ 10.032935] R10: 0000000000000000 R11: 0000000000000246 R12: 00005c4409681690
[ 10.032937] R13: 0000000000020000 R14: 00005c44094ff7f0 R15: 00005c4409681690
[ 10.032945] </TASK>
[ 10.032948] ---[ end trace 0000000000000000 ]---
[How]
Add wrapper functions that take the FPU guard (DC_FP_START()/DC_FP_END())
around the FPU-using variants for dcn21/dcn31/dcn315/dcn316.
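The guard pattern the wrappers apply can be sketched in plain userspace C. All names below are illustrative stand-ins, not the kernel's actual DC_FP_START()/DC_FP_END() macros or dc_assert_fp_enabled(); the point is only that the FPU-using function must run strictly inside the guarded region:

```c
#include <assert.h>

/* Hypothetical stand-in for the kernel's per-CPU FPU guard depth. */
static int fpu_depth;

static void fp_start(void) { fpu_depth++; }          /* like DC_FP_START() */

static void fp_end(void)                             /* like DC_FP_END() */
{
	assert(fpu_depth > 0);
	fpu_depth--;
}

/* Stand-in for dcn21_update_bw_bounding_box_fpu(): like
 * dc_assert_fp_enabled(), it warns if called without the guard held. */
static void update_bw_bounding_box_fpu(void)
{
	assert(fpu_depth > 0);	/* would WARN in the kernel otherwise */
}

/* The wrapper this patch adds: guard, call the _fpu body, unguard. */
static void update_bw_bounding_box(void)
{
	fp_start();
	update_bw_bounding_box_fpu();
	fp_end();
}
```

Calling the unwrapped `update_bw_bounding_box_fpu()` directly from a resource_funcs hook is exactly the situation that tripped the boot-time warning above.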
Fixes: 1489d86d9ac9 ("drm/amd/display: Move FPU Guards From DML To DC - Part 1")
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Reviewed-by: Rafal Ostrowski <rafal.ostrowski@amd.com>
Signed-off-by: Wayne Lin <Wayne.Lin@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 2 +-
drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.h | 2 +-
drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c | 6 +++---
drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h | 6 +++---
.../gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c | 7 +++++++
.../gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c | 7 +++++++
.../drm/amd/display/dc/resource/dcn315/dcn315_resource.c | 7 +++++++
.../drm/amd/display/dc/resource/dcn316/dcn316_resource.c | 7 +++++++
8 files changed, 36 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
index ed9dd2148d86..82f50847cbac 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.c
@@ -2400,7 +2400,7 @@ static struct _vcs_dpi_voltage_scaling_st construct_low_pstate_lvl(struct clk_li
return low_pstate_lvl;
}
-void dcn21_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+void dcn21_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params)
{
struct _vcs_dpi_voltage_scaling_st *s = dc->scratch.update_bw_bounding_box.clock_limits;
struct dcn21_resource_pool *pool = TO_DCN21_RES_POOL(dc->res_pool);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.h b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.h
index aed00039ca62..8b2226c5bbbf 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn20/dcn20_fpu.h
@@ -78,7 +78,7 @@ int dcn21_populate_dml_pipes_from_context(struct dc *dc,
enum dc_validate_mode validate_mode);
bool dcn21_validate_bandwidth_fp(struct dc *dc, struct dc_state *context, enum
dc_validate_mode, display_e2e_pipe_params_st *pipes);
-void dcn21_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params);
+void dcn21_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params);
void dcn21_clk_mgr_set_bw_params_wm_table(struct clk_bw_params *bw_params);
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
index 1a28061bb9ff..ad23215da9f8 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
@@ -587,7 +587,7 @@ void dcn31_calculate_wm_and_dlg_fp(
context->bw_ctx.bw.dcn.compbuf_size_kb = context->bw_ctx.dml.ip.config_return_buffer_size_in_kbytes - total_det;
}
-void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+void dcn31_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params)
{
struct _vcs_dpi_voltage_scaling_st *s = dc->scratch.update_bw_bounding_box.clock_limits;
struct clk_limit_table *clk_table = &bw_params->clk_table;
@@ -665,7 +665,7 @@ void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
dml_init_instance(&dc->dml, &dcn3_1_soc, &dcn3_1_ip, DML_PROJECT_DCN31);
}
-void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+void dcn315_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params)
{
struct clk_limit_table *clk_table = &bw_params->clk_table;
int i, max_dispclk_mhz = 0, max_dppclk_mhz = 0;
@@ -726,7 +726,7 @@ void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param
dml_init_instance(&dc->dml, &dcn3_15_soc, &dcn3_15_ip, DML_PROJECT_DCN315);
}
-void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+void dcn316_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params)
{
struct _vcs_dpi_voltage_scaling_st *s = dc->scratch.update_bw_bounding_box.clock_limits;
struct clk_limit_table *clk_table = &bw_params->clk_table;
diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h
index dfcc5d50071e..0b7fcbbfd17b 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.h
@@ -44,9 +44,9 @@ void dcn31_calculate_wm_and_dlg_fp(
int pipe_cnt,
int vlevel);
-void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params);
-void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params);
-void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params);
+void dcn31_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params);
+void dcn315_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params);
+void dcn316_update_bw_bounding_box_fpu(struct dc *dc, struct clk_bw_params *bw_params);
int dcn_get_max_non_odm_pix_rate_100hz(struct _vcs_dpi_soc_bounding_box_st *soc);
int dcn_get_approx_det_segs_required_for_pstate(
struct _vcs_dpi_soc_bounding_box_st *soc,
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
index 89a1931b8d23..775cfa901f08 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
@@ -1395,6 +1395,13 @@ static enum dc_status dcn21_patch_unknown_plane_state(struct dc_plane_state *pla
return dcn20_patch_unknown_plane_state(plane_state);
}
+static void dcn21_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+ DC_FP_START();
+ dcn21_update_bw_bounding_box_fpu(dc, bw_params);
+ DC_FP_END();
+}
+
static const struct resource_funcs dcn21_res_pool_funcs = {
.destroy = dcn21_destroy_resource_pool,
.link_enc_create = dcn21_link_encoder_create,
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
index 649b5e7c0373..200be0f46ab0 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
@@ -1858,6 +1858,13 @@ static struct dc_cap_funcs cap_funcs = {
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
};
+static void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+ DC_FP_START();
+ dcn31_update_bw_bounding_box_fpu(dc, bw_params);
+ DC_FP_END();
+}
+
static struct resource_funcs dcn31_res_pool_funcs = {
.destroy = dcn31_destroy_resource_pool,
.link_enc_create = dcn31_link_encoder_create,
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
index 1e86a5e4d113..76b112426f33 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
@@ -1853,6 +1853,13 @@ static struct dc_cap_funcs cap_funcs = {
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
};
+static void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+ DC_FP_START();
+ dcn315_update_bw_bounding_box_fpu(dc, bw_params);
+ DC_FP_END();
+}
+
static struct resource_funcs dcn315_res_pool_funcs = {
.destroy = dcn315_destroy_resource_pool,
.link_enc_create = dcn31_link_encoder_create,
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
index 6369fc90f84b..2d34db42dd83 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
@@ -1729,6 +1729,13 @@ static struct dc_cap_funcs cap_funcs = {
.get_dcc_compression_cap = dcn20_get_dcc_compression_cap
};
+static void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
+{
+ DC_FP_START();
+ dcn316_update_bw_bounding_box_fpu(dc, bw_params);
+ DC_FP_END();
+}
+
static struct resource_funcs dcn316_res_pool_funcs = {
.destroy = dcn316_destroy_resource_pool,
.link_enc_create = dcn31_link_encoder_create,
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread* Re: [PATCH 10/19] drm/amd/display: Fix fpu guard warning
2026-04-15 7:39 ` [PATCH 10/19] drm/amd/display: Fix fpu guard warning Chenyu Chen
@ 2026-04-17 8:07 ` mikhail.v.gavrilov
0 siblings, 0 replies; 22+ messages in thread
From: mikhail.v.gavrilov @ 2026-04-17 8:07 UTC (permalink / raw)
To: Chenyu Chen, amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Dillon Varone, Rafal Ostrowski
On Wed, 2026-04-15 at 15:39 +0800, Chenyu Chen wrote:
> From: Wayne Lin <Wayne.Lin@amd.com>
>
> [Why]
> Due to improper fpu guarding, we encounter this warning during boot
> up:
>
> [ 10.027021] WARNING:
> drivers/gpu/drm/amd/amdgpu/../display/amdgpu_dm/dc_fpu.c:58 at
> dc_assert_fp_enabled+0x12/0x20 [amdgpu], CPU#8: (udev-worker)/469
> [...]
>
> [How]
> Add wrapper function to guard fpu properly for
> dcn21/dcn31/dcn315/dcn316.
>
> Fixes: 1489d86d9ac9 ("drm/amd/display: Move FPU Guards From DML To DC
> - Part 1")
>
> Reviewed-by: Dillon Varone <dillon.varone@amd.com>
> Reviewed-by: Rafal Ostrowski <rafal.ostrowski@amd.com>
> Signed-off-by: Wayne Lin <Wayne.Lin@amd.com>
> Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
> [...]
Confirmed this fixes the dc_assert_fp_enabled warning on Ryzen 7000
(Raphael) iGPU (dcn315). I independently arrived at the same fix for
the dcn31 family before noticing this series, so I'm dropping my
duplicate submission [1].
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
[1]
https://lore.kernel.org/all/20260417001503.26147-1-mikhail.v.gavrilov@gmail.com
^ permalink raw reply [flat|nested] 22+ messages in thread
* [PATCH 11/19] drm/amd/display: Add Replay/PSR active check in link loss status check
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (9 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 10/19] drm/amd/display: Fix fpu guard warning Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 12/19] drm/amd/display: Remove SYMCLK F and G values from link encoder and MANUAL_FLOW_CONTROL from optc Chenyu Chen
` (8 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Allen Li, ChunTao Tso, Allen Li,
Chenyu Chen
From: Allen Li <Allen.Li@amd.com>
[Why&How]
To avoid unnecessary link retraining while the panel is in Replay/PSR mode,
check whether Replay/PSR is in the active state and consult the sink's ESD
information before deciding to retrain the link.
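The retrain gating this patch introduces can be sketched as a small predicate in plain C. The names and constants are illustrative stand-ins for the actual DC link structures; only the 0x7 sink-status value and the eDP special case come from the patch itself:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical signal-type constants, standing in for dc's enum. */
#define SIGNAL_TYPE_EDP          1
#define SIGNAL_TYPE_DISPLAY_PORT 2

/* Sketch of the link-loss check gating: non-eDP links are always
 * parsed for link loss; eDP links only when the sink reports replay
 * status 0x7 while Replay is disabled (ESD recovery needed). */
static bool should_check_link_loss(int connector_signal,
				   bool replay_allow_active,
				   unsigned int sink_replay_status)
{
	bool replay_esd_detection_needed =
		!replay_allow_active && sink_replay_status == 0x7;

	return connector_signal != SIGNAL_TYPE_EDP ||
	       replay_esd_detection_needed;
}
```

This mirrors the `connector_signal != SIGNAL_TYPE_EDP || replay_esd_detection_needed` condition added around dp_parse_link_loss_status() in the diff below.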
Reviewed-by: ChunTao Tso <chuntao.tso@amd.com>
Signed-off-by: Allen Li <allen.li@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../dc/link/protocols/link_dp_irq_handler.c | 57 +++++++++++--------
1 file changed, 34 insertions(+), 23 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
index 1860d44f63c1..dd19b912c48c 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_irq_handler.c
@@ -223,7 +223,7 @@ static void handle_hpd_irq_vesa_replay_sink(struct dc_link *link)
}
}
-static void handle_hpd_irq_replay_sink(struct dc_link *link, bool *need_re_enable)
+static void handle_hpd_irq_replay_sink(struct dc_link *link, bool *need_re_enable, bool *replay_esd_detection_needed)
{
union dpcd_replay_configuration replay_configuration = {0};
union dpcd_replay_configuration replay_sink_status = {0};
@@ -311,6 +311,14 @@ static void handle_hpd_irq_replay_sink(struct dc_link *link, bool *need_re_enabl
*need_re_enable = true;
}
}
+
+ if (!link->replay_settings.replay_allow_active &&
+ replay_sink_status.bits.SINK_DEVICE_REPLAY_STATUS == 0x7) {
+ /* If sink device replay status is 0x7 and replay is disabled,
+ * it means sink is in a bad state and link retraining is needed to recover
+ */
+ *replay_esd_detection_needed = true;
+ }
}
void dp_handle_link_loss(struct dc_link *link)
@@ -469,6 +477,7 @@ bool dp_handle_hpd_rx_irq(struct dc_link *link,
enum dc_status result;
bool status = false;
bool replay_re_enable_needed = false;
+ bool replay_esd_detection_needed = false;
if (out_link_loss)
*out_link_loss = false;
@@ -482,6 +491,7 @@ bool dp_handle_hpd_rx_irq(struct dc_link *link,
DC_LOG_HW_HPD_IRQ("%s: Got short pulse HPD on link %d\n",
__func__, link->link_index);
+ handle_hpd_irq_replay_sink(link, &replay_re_enable_needed, &replay_esd_detection_needed);
/* All the "handle_hpd_irq_xxx()" methods
* should be called only after
@@ -528,8 +538,6 @@ bool dp_handle_hpd_rx_irq(struct dc_link *link,
/* PSR-related error was detected and handled */
return true;
- handle_hpd_irq_replay_sink(link, &replay_re_enable_needed);
-
/* If PSR-related error handled, Main link may be off,
* so do not handle as a normal sink status change interrupt.
*/
@@ -552,27 +560,30 @@ bool dp_handle_hpd_rx_irq(struct dc_link *link,
* Downstream port status changed,
* then DM should call DC to do the detection.
* NOTE: Now includes eDP link loss detection and retraining
+ * Link will be retrained if panel is not EDP or
+ * Replay ESD recovery is needed.
*/
-
- if (dp_parse_link_loss_status(
- link,
- &hpd_irq_dpcd_data)) {
- /* Connectivity log: link loss */
- CONN_DATA_LINK_LOSS(link,
- hpd_irq_dpcd_data.raw,
- sizeof(hpd_irq_dpcd_data),
- "Status: ");
-
- if (defer_handling && has_left_work)
- *has_left_work = true;
- else
- dp_handle_link_loss(link);
-
- status = false;
- if (out_link_loss)
- *out_link_loss = true;
-
- dp_trace_link_loss_increment(link);
+ if (link->connector_signal != SIGNAL_TYPE_EDP || replay_esd_detection_needed) {
+ if (dp_parse_link_loss_status(
+ link,
+ &hpd_irq_dpcd_data)) {
+ /* Connectivity log: link loss */
+ CONN_DATA_LINK_LOSS(link,
+ hpd_irq_dpcd_data.raw,
+ sizeof(hpd_irq_dpcd_data),
+ "Status: ");
+
+ if (defer_handling && has_left_work)
+ *has_left_work = true;
+ else
+ dp_handle_link_loss(link);
+
+ status = false;
+ if (out_link_loss)
+ *out_link_loss = true;
+
+ dp_trace_link_loss_increment(link);
+ }
}
if (link->dpcd_caps.usb4_dp_tun_info.dp_tun_cap.bits.dp_tunneling) {
--
2.43.0
^ permalink raw reply related [flat|nested] 22+ messages in thread* [PATCH 12/19] drm/amd/display: Remove SYMCLK F and G values from link encoder and MANUAL_FLOW_CONTROL from optc
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Andrew Lichmanov, Charlene Liu,
Chenyu Chen
From: Andrew Lichmanov <Andrew.Lichmanov@amd.com>
[WHY]
These register field definitions were removed from the new hardware
headers, so the driver must drop its references to them.
Reviewed-by: Charlene Liu <charlene.liu@amd.com>
Signed-off-by: Andrew Lichmanov <Andrew.Lichmanov@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_smu.c | 1 -
.../gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_link_encoder.h | 2 --
drivers/gpu/drm/amd/display/dc/optc/dcn42/dcn42_optc.h | 1 -
3 files changed, 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_smu.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_smu.c
index c791bb1edb47..6d0012b7d6dc 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_smu.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_smu.c
@@ -428,4 +428,3 @@ void dcn42_smu_set_dtbclk(struct clk_mgr_internal *clk_mgr, bool enable)
enable);
smu_print("%s: smu_set_dtbclk = %d\n", __func__, enable ? 1 : 0);
}
-
diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_link_encoder.h b/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_link_encoder.h
index 4b5a9594f279..9c607b24ec1c 100644
--- a/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_link_encoder.h
+++ b/drivers/gpu/drm/amd/display/dc/dio/dcn42/dcn42_dio_link_encoder.h
@@ -121,8 +121,6 @@
LE_SF(DIO_CLK_CNTL, SYMCLKC_G_HDCP_GATE_DIS, mask_sh),\
LE_SF(DIO_CLK_CNTL, SYMCLKD_G_HDCP_GATE_DIS, mask_sh),\
LE_SF(DIO_CLK_CNTL, SYMCLKE_G_HDCP_GATE_DIS, mask_sh),\
- LE_SF(DIO_CLK_CNTL, SYMCLKF_G_HDCP_GATE_DIS, mask_sh),\
- LE_SF(DIO_CLK_CNTL, SYMCLKG_G_HDCP_GATE_DIS, mask_sh)
void dcn42_link_encoder_construct(
struct dcn20_link_encoder *enc20,
diff --git a/drivers/gpu/drm/amd/display/dc/optc/dcn42/dcn42_optc.h b/drivers/gpu/drm/amd/display/dc/optc/dcn42/dcn42_optc.h
index fc7192f01b33..8e7d65317e7c 100644
--- a/drivers/gpu/drm/amd/display/dc/optc/dcn42/dcn42_optc.h
+++ b/drivers/gpu/drm/amd/display/dc/optc/dcn42/dcn42_optc.h
@@ -164,7 +164,6 @@
SF(GSL_SOURCE_SELECT, GSL0_READY_SOURCE_SEL, mask_sh),\
SF(GSL_SOURCE_SELECT, GSL1_READY_SOURCE_SEL, mask_sh),\
SF(GSL_SOURCE_SELECT, GSL2_READY_SOURCE_SEL, mask_sh),\
- SF(OTG0_OTG_GLOBAL_CONTROL2, MANUAL_FLOW_CONTROL_SEL, mask_sh),\
SF(OTG0_OTG_GLOBAL_CONTROL2, GLOBAL_UPDATE_LOCK_EN, mask_sh),\
SF(OTG0_OTG_GSL_WINDOW_X, OTG_GSL_WINDOW_START_X, mask_sh),\
SF(OTG0_OTG_GSL_WINDOW_X, OTG_GSL_WINDOW_END_X, mask_sh), \
--
2.43.0
* [PATCH 13/19] drm/amd/display: Add minimum vfp requirement
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Dillon Varone, Austin Zheng,
Chenyu Chen
From: Dillon Varone <Dillon.Varone@amd.com>
[WHY&HOW]
Vertical front porch (vfp) must be at least 1 line (2 lines for
interlaced timings), and must be patched when it is smaller. This must
be done pre-DML so the DLG programming remains consistent with the OTG
programming.
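The clamping rule above can be sketched as follows (a minimal standalone
illustration; the struct and helper names are hypothetical, not the DC
code in this patch):

```c
/*
 * Sketch only: clamp the vertical front porch to the hardware minimum
 * before the timing is handed to DML, so DLG and OTG stay consistent.
 * timing_sketch and clamped_vfp are illustrative names.
 */
#include <assert.h>
#include <stdbool.h>

struct timing_sketch {
	unsigned int v_front_porch;
	bool interlaced;
};

/* Interlaced timings need at least 2 lines of vfp, progressive at least 1. */
static unsigned int clamped_vfp(const struct timing_sketch *t)
{
	unsigned int min_vfp = t->interlaced ? 2u : 1u;

	return t->v_front_porch > min_vfp ? t->v_front_porch : min_vfp;
}
```

A vfp of 0 is patched up to the minimum, while any already-valid value
passes through unchanged.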
Reviewed-by: Austin Zheng <austin.zheng@amd.com>
Signed-off-by: Dillon Varone <Dillon.Varone@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../amd/display/dc/dml2_0/dml21/dml21_translation_helper.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
index 476030193f14..9031fd582ec7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
@@ -90,6 +90,8 @@ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cf
struct pipe_ctx *pipe_ctx,
struct dml2_context *dml_ctx)
{
+ const unsigned int min_v_front_porch = (stream->timing.flags.INTERLACE != 0) ? 2 : 1;
+
unsigned int hblank_start, vblank_start;
uint64_t min_hardware_refresh_in_uhz;
uint32_t pix_clk_100hz;
@@ -97,7 +99,8 @@ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cf
timing->h_active = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right + pipe_ctx->dsc_padding_params.dsc_hactive_padding;
timing->v_active = stream->timing.v_addressable + stream->timing.v_border_bottom + stream->timing.v_border_top;
timing->h_front_porch = stream->timing.h_front_porch;
- timing->v_front_porch = stream->timing.v_front_porch;
+ timing->v_front_porch = stream->timing.v_front_porch > min_v_front_porch ?
+ stream->timing.v_front_porch : min_v_front_porch;
timing->pixel_clock_khz = stream->timing.pix_clk_100hz / 10;
if (pipe_ctx->dsc_padding_params.dsc_hactive_padding != 0)
timing->pixel_clock_khz = pipe_ctx->dsc_padding_params.dsc_pix_clk_100hz / 10;
@@ -116,7 +119,7 @@ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cf
if (hblank_start < stream->timing.h_addressable)
timing->h_blank_end = 0;
- vblank_start = stream->timing.v_total - stream->timing.v_front_porch;
+ vblank_start = timing->v_total - timing->v_front_porch;
timing->v_blank_end = vblank_start - stream->timing.v_addressable
- stream->timing.v_border_top - stream->timing.v_border_bottom;
--
2.43.0
* [PATCH 14/19] drm/amd/display: Fix narrowing boundaries and eDP parser assignment
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Gaghik Khachatrian, Dillon Varone,
Chenyu Chen
From: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
[Why] drm/amd/display had implicit integer narrowing at protocol/storage boundaries
and an incomplete eDP assignment in integrated info parsing.
[How] Apply explicit boundary casts for intentional narrowing, keep intermediate math
in wider types, and restore explicit eDP field mapping in v2.2 parser.
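The cast pattern described above can be sketched as follows (a
hypothetical helper, not code from this series): intermediate arithmetic
stays in a wide type, and the value is narrowed exactly once, with an
explicit cast, at the storage boundary.

```c
/*
 * Sketch only: refresh_rate_hz is an illustrative name, loosely modeled
 * on the dce110 refresh-rate computation touched by this patch.
 */
#include <assert.h>
#include <stdint.h>

static uint16_t refresh_rate_hz(uint32_t pix_clk_100hz,
				uint32_t h_total, uint32_t v_total)
{
	/* Widen before multiplying so pix_clk_100hz * 100 cannot overflow;
	 * the single (uint16_t) cast marks the intentional narrowing. */
	uint64_t hz = ((uint64_t)pix_clk_100hz * 100) /
		      ((uint64_t)h_total * v_total);

	return (uint16_t)hz;
}
```

For example, a 148.5 MHz pixel clock (1485000 in 100 Hz units) on a
2200x1125 total timing yields 60 Hz, with no truncation hidden in the
intermediate math.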
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../gpu/drm/amd/display/dc/bios/bios_parser2.c | 1 +
.../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 3 ++-
.../amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c | 2 +-
.../amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c | 2 +-
.../display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c | 2 +-
.../display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 2 +-
.../display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c | 2 +-
.../amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c | 2 +-
.../amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c | 2 +-
.../gpu/drm/amd/display/dc/core/dc_resource.c | 14 +++++++-------
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 7 ++++---
drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c | 16 ++++++++--------
.../gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c | 4 ++--
drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c | 3 ++-
.../amd/display/dc/hwss/dce110/dce110_hwseq.c | 4 ++--
.../display/dc/irq/dce110/irq_service_dce110.c | 2 +-
drivers/gpu/drm/amd/display/dc/link/link_dpms.c | 10 +++++-----
.../amd/display/dc/link/protocols/link_dp_dpia.c | 2 +-
.../display/dc/link/protocols/link_dp_dpia_bw.c | 6 +++---
.../dc/link/protocols/link_dp_panel_replay.c | 4 ++--
.../dc/resource/dcn32/dcn32_resource_helpers.c | 2 +-
.../dcn401/dcn401_soc_and_ip_translator.c | 14 +++++++-------
.../display/modules/info_packet/info_packet.c | 2 +-
.../amd/display/modules/power/power_helpers.c | 16 ++++++++--------
24 files changed, 64 insertions(+), 60 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
index dd45cc170fc7..b4dd8219b8f0 100644
--- a/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
+++ b/drivers/gpu/drm/amd/display/dc/bios/bios_parser2.c
@@ -2977,6 +2977,7 @@ static enum bp_result get_integrated_info_v2_2(
info->edp1_info.edp_panel_bpc =
info_v2_2->edp1_info.edp_panel_bpc;
info->edp1_info.edp_bootup_bl_level =
+ info_v2_2->edp1_info.edp_bootup_bl_level;
info->edp2_info.edp_backlight_pwm_hz =
le16_to_cpu(info_v2_2->edp2_info.edp_backlight_pwm_hz);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
index 79eb5ae8ec6f..df06e5cd27aa 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
@@ -476,7 +476,8 @@ static void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_sm
ranges->reader_wm_sets[num_valid_sets].min_drain_clk_mhz = 0;
else {
/* add 1 to make it non-overlapping with next lvl */
- ranges->reader_wm_sets[num_valid_sets].min_drain_clk_mhz = bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ ranges->reader_wm_sets[num_valid_sets].min_drain_clk_mhz =
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
ranges->reader_wm_sets[num_valid_sets].max_drain_clk_mhz =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
index caa15cfba7c3..70f6f0913f13 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn301/vg_clk_mgr.c
@@ -409,7 +409,7 @@ static void vg_build_watermark_ranges(struct clk_bw_params *bw_params, struct wa
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
index 1d94c4bae9de..68a121dbb489 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
@@ -447,7 +447,7 @@ static void dcn31_build_watermark_ranges(struct clk_bw_params *bw_params, struct
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
index 1814ec248dab..0d5892266112 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
@@ -518,7 +518,7 @@ static void dcn314_build_watermark_ranges(struct clk_bw_params *bw_params, struc
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
index 382e1b891c47..ef184f28e426 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
@@ -408,7 +408,7 @@ static void dcn315_build_watermark_ranges(struct clk_bw_params *bw_params, struc
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
index a162a453447c..aa8f2a5edc21 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn316/dcn316_clk_mgr.c
@@ -374,7 +374,7 @@ static void dcn316_build_watermark_ranges(struct clk_bw_params *bw_params, struc
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
index 688a4bdc20b5..ddcde2433211 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
@@ -886,7 +886,7 @@ static void dcn35_build_watermark_ranges(struct clk_bw_params *bw_params, struct
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
index e39fd97b3ffd..a0cdaf69056e 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c
@@ -657,7 +657,7 @@ void dcn42_build_watermark_ranges(struct clk_bw_params *bw_params, struct dcn42_
else {
/* add 1 to make it non-overlapping with next lvl */
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MinMclk =
- bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1;
+ (uint16_t)(bw_params->clk_table.entries[i - 1].dcfclk_mhz + 1);
}
table->WatermarkRow[WM_DCFCLK][num_valid_sets].MaxMclk =
(uint16_t)bw_params->clk_table.entries[i].dcfclk_mhz;
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
index 20600455ff63..19526a278b2a 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
@@ -4724,10 +4724,10 @@ static void set_avi_info_frame(
* barLeft: Pixel Number of End of Left Bar.
* barRight: Pixel Number of Start of Right Bar. */
hdmi_info.bits.bar_top = (uint16_t)stream->timing.v_border_top;
- hdmi_info.bits.bar_bottom = (stream->timing.v_total
+ hdmi_info.bits.bar_bottom = (uint16_t)(stream->timing.v_total
- stream->timing.v_border_bottom + 1);
hdmi_info.bits.bar_left = (uint16_t)stream->timing.h_border_left;
- hdmi_info.bits.bar_right = (stream->timing.h_total
+ hdmi_info.bits.bar_right = (uint16_t)(stream->timing.h_total
- stream->timing.h_border_right + 1);
/* Additional Colorimetry Extension
@@ -5363,7 +5363,7 @@ bool get_temp_dp_link_res(struct dc_link *link,
void reset_syncd_pipes_from_disabled_pipes(struct dc *dc,
struct dc_state *context)
{
- int i, j;
+ uint8_t i, j;
struct pipe_ctx *pipe_ctx_old, *pipe_ctx, *pipe_ctx_syncd;
/* If pipe backend is reset, need to reset pipe syncd status */
@@ -5426,7 +5426,7 @@ void reset_sync_context_for_pipe(const struct dc *dc,
struct dc_state *context,
uint8_t pipe_idx)
{
- int i;
+ uint8_t i;
struct pipe_ctx *pipe_ctx_reset;
/* reset the otg sync context for the pipe and its slave pipes if any */
@@ -5442,7 +5442,7 @@ void reset_sync_context_for_pipe(const struct dc *dc,
uint8_t resource_transmitter_to_phy_idx(const struct dc *dc, enum transmitter transmitter)
{
/* TODO - get transmitter to phy idx mapping from DMUB */
- uint8_t phy_idx = transmitter - TRANSMITTER_UNIPHY_A;
+ uint8_t phy_idx = (uint8_t)(transmitter - TRANSMITTER_UNIPHY_A);
if (dc->ctx->dce_version == DCN_VERSION_3_1 &&
dc->ctx->asic_id.hw_internal_rev == YELLOW_CARP_B0) {
@@ -5509,8 +5509,8 @@ const struct link_hwss *get_link_hwss(const struct dc_link *link,
bool is_h_timing_divisible_by_2(struct dc_stream_state *stream)
{
bool divisible = false;
- uint16_t h_blank_start = 0;
- uint16_t h_blank_end = 0;
+ uint32_t h_blank_start = 0;
+ uint32_t h_blank_end = 0;
if (stream) {
h_blank_start = stream->timing.h_total - stream->timing.h_front_porch;
diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
index 317c69719313..0dd6d1463137 100644
--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
@@ -697,7 +697,8 @@ static void populate_subvp_cmd_vblank_pipe_info(struct dc *dc,
pipe_data->pipe_config.vblank_data.vblank_pipe_index = vblank_pipe->pipe_idx;
pipe_data->pipe_config.vblank_data.vstartup_start = (uint16_t)vblank_pipe->pipe_dlg_param.vstartup_start;
pipe_data->pipe_config.vblank_data.vblank_end =
- vblank_pipe->stream->timing.v_total - vblank_pipe->stream->timing.v_front_porch - vblank_pipe->stream->timing.v_addressable;
+ (uint16_t)(vblank_pipe->stream->timing.v_total -
+ vblank_pipe->stream->timing.v_front_porch - vblank_pipe->stream->timing.v_addressable);
if (vblank_pipe->stream->ignore_msa_timing_param &&
(vblank_pipe->stream->allow_freesync || vblank_pipe->stream->vrr_active_variable || vblank_pipe->stream->vrr_active_fixed))
@@ -831,7 +832,7 @@ static void populate_subvp_cmd_pipe_info(struct dc *dc,
// Prefetch lines is equal to VACTIVE + BP + VSYNC
pipe_data->pipe_config.subvp_data.prefetch_lines =
- phantom_timing->v_total - phantom_timing->v_front_porch;
+ (uint16_t)(phantom_timing->v_total - phantom_timing->v_front_porch);
// Round up
pipe_data->pipe_config.subvp_data.prefetch_to_mall_start_lines =
@@ -1811,7 +1812,7 @@ static void dc_dmub_srv_rb_based_fams2_update_config(struct dc *dc,
struct dc_state *context,
bool enable)
{
- uint8_t num_cmds = 1;
+ uint32_t num_cmds = 1;
uint32_t i;
union dmub_rb_cmd cmd[2 * MAX_STREAMS + 1];
struct dmub_rb_cmd_fams2 *global_cmd = &cmd[0].fams2_config;
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c
index 52e05b9185f1..e64b3b6bff5c 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_i2c_sw.c
@@ -75,7 +75,7 @@ static void release_engine_dce_sw(
static bool wait_for_scl_high_sw(
struct dc_context *ctx,
struct ddc *ddc,
- uint16_t clock_delay_div_4)
+ uint32_t clock_delay_div_4)
{
(void)ctx;
uint32_t scl_retry = 0;
@@ -97,7 +97,7 @@ static bool wait_for_scl_high_sw(
static bool write_byte_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4,
+ uint32_t clock_delay_div_4,
uint8_t byte)
{
int32_t shift = 7;
@@ -154,7 +154,7 @@ static bool write_byte_sw(
static bool read_byte_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4,
+ uint32_t clock_delay_div_4,
uint8_t *byte,
bool more)
{
@@ -214,7 +214,7 @@ static bool read_byte_sw(
static bool stop_sync_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4)
+ uint32_t clock_delay_div_4)
{
uint32_t retry = 0;
@@ -251,7 +251,7 @@ static bool stop_sync_sw(
static bool i2c_write_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4,
+ uint32_t clock_delay_div_4,
uint8_t address,
uint32_t length,
const uint8_t *data)
@@ -273,7 +273,7 @@ static bool i2c_write_sw(
static bool i2c_read_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4,
+ uint32_t clock_delay_div_4,
uint8_t address,
uint32_t length,
uint8_t *data)
@@ -298,7 +298,7 @@ static bool i2c_read_sw(
static bool start_sync_sw(
struct dc_context *ctx,
struct ddc *ddc_handle,
- uint16_t clock_delay_div_4)
+ uint32_t clock_delay_div_4)
{
uint32_t retry = 0;
@@ -399,7 +399,7 @@ static void dce_i2c_sw_engine_submit_channel_request(struct dce_i2c_sw *engine,
struct i2c_request_transaction_data *req)
{
struct ddc *ddc = engine->ddc;
- uint16_t clock_delay_div_4 = engine->clock_delay >> 2;
+ uint32_t clock_delay_div_4 = engine->clock_delay >> 2;
/* send sync (start / repeated start) */
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c b/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
index 9e63d075c1cf..9c326ad1d3b1 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c
@@ -415,10 +415,10 @@ bool dsc_prepare_config(const struct dsc_config *dsc_cfg, struct dsc_reg_values
dsc_reg_vals->ich_reset_at_eol = (dsc_cfg->is_odm || dsc_reg_vals->num_slices_h > 1) ? 0xF : 0;
// Need to find the ceiling value for the slice width
- dsc_reg_vals->pps.slice_width = (dsc_cfg->pic_width + dsc_cfg->dsc_padding + dsc_cfg->dc_dsc_cfg.num_slices_h - 1) / dsc_cfg->dc_dsc_cfg.num_slices_h;
+ dsc_reg_vals->pps.slice_width = (u16)((dsc_cfg->pic_width + dsc_cfg->dsc_padding + dsc_cfg->dc_dsc_cfg.num_slices_h - 1) / dsc_cfg->dc_dsc_cfg.num_slices_h);
// TODO: in addition to validating slice height (pic height must be divisible by slice height),
// see what happens when the same condition doesn't apply for slice_width/pic_width.
- dsc_reg_vals->pps.slice_height = dsc_cfg->pic_height / dsc_cfg->dc_dsc_cfg.num_slices_v;
+ dsc_reg_vals->pps.slice_height = (u16)(dsc_cfg->pic_height / dsc_cfg->dc_dsc_cfg.num_slices_v);
ASSERT(dsc_reg_vals->pps.slice_height * dsc_cfg->dc_dsc_cfg.num_slices_v == dsc_cfg->pic_height);
if (!(dsc_reg_vals->pps.slice_height * dsc_cfg->dc_dsc_cfg.num_slices_v == dsc_cfg->pic_height)) {
diff --git a/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c b/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
index a34031b5c9d5..6c07d9a87bfe 100644
--- a/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
+++ b/drivers/gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c
@@ -103,7 +103,8 @@ int dscc_compute_dsc_parameters(const struct drm_dsc_config *pps,
struct drm_dsc_config dsc_cfg;
dsc_params->pps = *pps;
- dsc_params->pps.initial_scale_value = 8 * rc->rc_model_size / (rc->rc_model_size - rc->initial_fullness_offset);
+ dsc_params->pps.initial_scale_value = (u8)(8 * rc->rc_model_size /
+ (rc->rc_model_size - rc->initial_fullness_offset));
copy_pps_fields(&dsc_cfg, &dsc_params->pps);
copy_rc_to_cfg(&dsc_cfg, rc);
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
index f2ac516b685f..7af239524d71 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
@@ -1443,8 +1443,8 @@ void build_audio_output(
(stream->timing.flags.INTERLACE != 0);
audio_output->crtc_info.refresh_rate =
- (stream->timing.pix_clk_100hz*100)/
- (stream->timing.h_total*stream->timing.v_total);
+ (uint16_t)((stream->timing.pix_clk_100hz*100)/
+ (stream->timing.h_total*stream->timing.v_total));
audio_output->crtc_info.color_depth =
stream->timing.display_color_depth;
diff --git a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
index 65a98400c486..0ac8a2e8380c 100644
--- a/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
+++ b/drivers/gpu/drm/amd/display/dc/irq/dce110/irq_service_dce110.c
@@ -210,7 +210,7 @@ bool dce110_vblank_set(struct irq_service *irq_service,
dc_interrupt_to_irq_source(irq_service->ctx->dc,
info->src_id,
info->ext_id);
- uint8_t pipe_offset = dal_irq_src - IRQ_TYPE_VBLANK;
+ unsigned int pipe_offset = dal_irq_src - IRQ_TYPE_VBLANK;
struct timing_generator *tg;
diff --git a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
index f7cc419cfbff..e7d3f9bd8aa5 100644
--- a/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
+++ b/drivers/gpu/drm/amd/display/dc/link/link_dpms.c
@@ -546,24 +546,24 @@ static void update_psp_stream_config(struct pipe_ctx *pipe_ctx, bool dpms_off)
config.dig_fe = (uint8_t) pipe_ctx->stream_res.stream_enc->stream_enc_inst;
/* stream encoder index */
- config.stream_enc_idx = pipe_ctx->stream_res.stream_enc->id - ENGINE_ID_DIGA;
+ config.stream_enc_idx = (uint8_t)(pipe_ctx->stream_res.stream_enc->id - ENGINE_ID_DIGA);
if (dp_is_128b_132b_signal(pipe_ctx))
config.stream_enc_idx =
- pipe_ctx->stream_res.hpo_dp_stream_enc->id - ENGINE_ID_HPO_DP_0;
+ (uint8_t)(pipe_ctx->stream_res.hpo_dp_stream_enc->id - ENGINE_ID_HPO_DP_0);
/* dig back end */
config.dig_be = pipe_ctx->stream->link->link_enc_hw_inst;
/* link encoder index */
- config.link_enc_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A;
+ config.link_enc_idx = (uint8_t)(link_enc->transmitter - TRANSMITTER_UNIPHY_A);
if (dp_is_128b_132b_signal(pipe_ctx))
config.link_enc_idx = (uint8_t)pipe_ctx->link_res.hpo_dp_link_enc->inst;
/* dio output index is dpia index for DPIA endpoint & dcio index by default */
if (pipe_ctx->stream->link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA)
- config.dio_output_idx = pipe_ctx->stream->link->link_id.enum_id - ENUM_ID_1;
+ config.dio_output_idx = (uint8_t)(pipe_ctx->stream->link->link_id.enum_id - ENUM_ID_1);
else
- config.dio_output_idx = link_enc->transmitter - TRANSMITTER_UNIPHY_A;
+ config.dio_output_idx = (uint8_t)(link_enc->transmitter - TRANSMITTER_UNIPHY_A);
/* phy index */
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
index 766b54631c79..da227889e007 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia.c
@@ -119,7 +119,7 @@ bool dpia_query_hpd_status(struct dc_link *link)
/* prepare QUERY_HPD command */
cmd.query_hpd.header.type = DMUB_CMD__QUERY_HPD_STATE;
cmd.query_hpd.header.payload_bytes = sizeof(cmd.query_hpd.data);
- cmd.query_hpd.data.instance = link->link_id.enum_id - ENUM_ID_1;
+ cmd.query_hpd.data.instance = (uint8_t)(link->link_id.enum_id - ENUM_ID_1);
cmd.query_hpd.data.ch_type = AUX_CHANNEL_DPIA;
/* Query dpia hpd status from dmub */
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
index 6406fe890850..d79c18c4903a 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_dpia_bw.c
@@ -180,7 +180,7 @@ static void dpia_bw_alloc_unplug(struct dc_link *link)
static void link_dpia_send_bw_alloc_request(struct dc_link *link, int req_bw)
{
- uint8_t request_reg_val;
+ uint32_t request_reg_val;
uint32_t temp, request_bw;
if (link->dpia_bw_alloc_config.bw_granularity == 0) {
@@ -212,8 +212,8 @@ static void link_dpia_send_bw_alloc_request(struct dc_link *link, int req_bw)
link->dpia_bw_alloc_config.allocated_bw = request_bw;
DC_LOG_DC("%s: Link[%d]: Request BW: %d", __func__, link->link_index, request_bw);
- core_link_write_dpcd(link, REQUESTED_BW,
- &request_reg_val,
+ uint8_t requested_bw_dpcd = (uint8_t)request_reg_val;
+ core_link_write_dpcd(link, REQUESTED_BW, &requested_bw_dpcd,
sizeof(uint8_t));
}
diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
index e1991776c59d..72d6e6011a09 100644
--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
+++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_panel_replay.c
@@ -328,9 +328,9 @@ bool dp_pr_copy_settings(struct dc_link *link, struct replay_context *replay_con
link->dpcd_caps.vesa_replay_su_info.pr_su_y_granularity_extended_caps;
if (pipe_ctx->stream->timing.dsc_cfg.num_slices_v > 0)
- cmd.pr_copy_settings.data.dsc_slice_height = (pipe_ctx->stream->timing.v_addressable +
+ cmd.pr_copy_settings.data.dsc_slice_height = (uint16_t)((pipe_ctx->stream->timing.v_addressable +
pipe_ctx->stream->timing.v_border_top + pipe_ctx->stream->timing.v_border_bottom) /
- pipe_ctx->stream->timing.dsc_cfg.num_slices_v;
+ pipe_ctx->stream->timing.dsc_cfg.num_slices_v);
if (dc_is_embedded_signal(link->connector_signal))
cmd.pr_copy_settings.data.main_link_activity_option = OPTION_1C;
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
index 4808c793590f..b2eac83ef02c 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource_helpers.c
@@ -333,7 +333,7 @@ void dcn32_determine_det_override(struct dc *dc,
continue;
if (context->stream_status[i].plane_count > 0)
- plane_segments = stream_segments / context->stream_status[i].plane_count;
+ plane_segments = (uint8_t)(stream_segments / context->stream_status[i].plane_count);
else
plane_segments = stream_segments;
for (j = 0; j < dc->res_pool->pipe_count; j++) {
diff --git a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
index e4811c3728a9..89f7ccd7f81f 100644
--- a/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
+++ b/drivers/gpu/drm/amd/display/dc/soc_and_ip_translator/dcn401/dcn401_soc_and_ip_translator.c
@@ -49,7 +49,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].dcfclk_mhz > dc_bw_params->dc_mode_limit.dcfclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].dcfclk_mhz < dc_bw_params->dc_mode_limit.dcfclk_mhz) {
dml_clk_table->dcfclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.dcfclk_mhz * 1000;
- dml_clk_table->dcfclk.num_clk_values = i + 1;
+ dml_clk_table->dcfclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->dcfclk.clk_values_khz[i] = 0;
dml_clk_table->dcfclk.num_clk_values = (uint8_t)i;
@@ -72,7 +72,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].fclk_mhz > dc_bw_params->dc_mode_limit.fclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].fclk_mhz < dc_bw_params->dc_mode_limit.fclk_mhz) {
dml_clk_table->fclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.fclk_mhz * 1000;
- dml_clk_table->fclk.num_clk_values = i + 1;
+ dml_clk_table->fclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->fclk.clk_values_khz[i] = 0;
dml_clk_table->fclk.num_clk_values = (uint8_t)i;
@@ -95,7 +95,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].memclk_mhz > dc_bw_params->dc_mode_limit.memclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].memclk_mhz < dc_bw_params->dc_mode_limit.memclk_mhz) {
dml_clk_table->uclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.memclk_mhz * 1000;
- dml_clk_table->uclk.num_clk_values = i + 1;
+ dml_clk_table->uclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->uclk.clk_values_khz[i] = 0;
dml_clk_table->uclk.num_clk_values = (uint8_t)i;
@@ -121,7 +121,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].dispclk_mhz > dc_bw_params->dc_mode_limit.dispclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].dispclk_mhz < dc_bw_params->dc_mode_limit.dispclk_mhz) {
dml_clk_table->dispclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.dispclk_mhz * 1000;
- dml_clk_table->dispclk.num_clk_values = i + 1;
+ dml_clk_table->dispclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->dispclk.clk_values_khz[i] = 0;
dml_clk_table->dispclk.num_clk_values = (uint8_t)i;
@@ -144,7 +144,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].dppclk_mhz > dc_bw_params->dc_mode_limit.dppclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].dppclk_mhz < dc_bw_params->dc_mode_limit.dppclk_mhz) {
dml_clk_table->dppclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.dppclk_mhz * 1000;
- dml_clk_table->dppclk.num_clk_values = i + 1;
+ dml_clk_table->dppclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->dppclk.clk_values_khz[i] = 0;
dml_clk_table->dppclk.num_clk_values = (uint8_t)i;
@@ -167,7 +167,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].dtbclk_mhz > dc_bw_params->dc_mode_limit.dtbclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].dtbclk_mhz < dc_bw_params->dc_mode_limit.dtbclk_mhz) {
dml_clk_table->dtbclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.dtbclk_mhz * 1000;
- dml_clk_table->dtbclk.num_clk_values = i + 1;
+ dml_clk_table->dtbclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->dtbclk.clk_values_khz[i] = 0;
dml_clk_table->dtbclk.num_clk_values = (uint8_t)i;
@@ -190,7 +190,7 @@ static void dcn401_convert_dc_clock_table_to_soc_bb_clock_table(
dc_clk_table->entries[i].socclk_mhz > dc_bw_params->dc_mode_limit.socclk_mhz) {
if (i == 0 || dc_clk_table->entries[i-1].socclk_mhz < dc_bw_params->dc_mode_limit.socclk_mhz) {
dml_clk_table->socclk.clk_values_khz[i] = dc_bw_params->dc_mode_limit.socclk_mhz * 1000;
- dml_clk_table->socclk.num_clk_values = i + 1;
+ dml_clk_table->socclk.num_clk_values = (uint8_t)(i + 1);
} else {
dml_clk_table->socclk.clk_values_khz[i] = 0;
dml_clk_table->socclk.num_clk_values = (uint8_t)i;
diff --git a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
index 00473c6284d5..55c7250f18d8 100644
--- a/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
+++ b/drivers/gpu/drm/amd/display/modules/info_packet/info_packet.c
@@ -246,7 +246,7 @@ void set_vsc_packet_colorimetry_data(
break;
}
- info_packet->sb[16] = (pixelEncoding << 4) | colorimetryFormat;
+ info_packet->sb[16] = (uint8_t)((pixelEncoding << 4) | colorimetryFormat);
/* Set color depth */
switch (stream->timing.display_color_depth) {
diff --git a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
index 5d444e9eb38f..f8b763db9b8c 100644
--- a/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
+++ b/drivers/gpu/drm/amd/display/modules/power/power_helpers.c
@@ -647,12 +647,12 @@ static void fill_iram_v_2_3(struct iram_table_v_2_2 *ram_table, struct dmcu_iram
unsigned int set = params.set;
ram_table->flags = 0x0;
- ram_table->min_abm_backlight = (big_endian) ?
+ ram_table->min_abm_backlight = (uint16_t)((big_endian) ?
cpu_to_be16(params.min_abm_backlight) :
- cpu_to_le16(params.min_abm_backlight);
+ cpu_to_le16(params.min_abm_backlight));
for (i = 0; i < NUM_AGGR_LEVEL; i++) {
- ram_table->hybrid_factor[i] = abm_settings[set][i].brightness_gain;
+ ram_table->hybrid_factor[i] = (uint8_t)abm_settings[set][i].brightness_gain;
ram_table->contrast_factor[i] = abm_settings[set][i].contrast_factor;
ram_table->deviation_gain[i] = abm_settings[set][i].deviation_gain;
ram_table->min_knee[i] = abm_settings[set][i].min_knee;
@@ -960,8 +960,8 @@ bool psr_su_set_dsc_slice_height(struct dc *dc, struct dc_link *link,
struct dc_stream_state *stream,
struct psr_config *config)
{
- uint16_t pic_height;
- uint16_t slice_height;
+ uint32_t pic_height;
+ uint32_t slice_height;
config->dsc_slice_height = 0;
if (!(link->connector_signal & SIGNAL_TYPE_EDP) ||
@@ -978,7 +978,7 @@ bool psr_su_set_dsc_slice_height(struct dc *dc, struct dc_link *link,
return false;
slice_height = pic_height / stream->timing.dsc_cfg.num_slices_v;
- config->dsc_slice_height = slice_height;
+ config->dsc_slice_height = (uint16_t)slice_height;
if (slice_height) {
if (config->su_y_granularity &&
@@ -1056,7 +1056,7 @@ void set_replay_low_rr_full_screen_video_src_vtotal(struct dc_link *link, uint16
void calculate_replay_link_off_frame_count(struct dc_link *link,
uint16_t vtotal, uint16_t htotal)
{
- uint8_t max_link_off_frame_count = 0;
+ uint32_t max_link_off_frame_count = 0;
uint16_t max_deviation_line = 0, pixel_deviation_per_line = 0;
if (!link || link->replay_settings.config.replay_version != DC_FREESYNC_REPLAY)
@@ -1093,7 +1093,7 @@ bool fill_custom_backlight_caps(unsigned int config_no, struct dm_acpi_atif_back
caps->dc_level_percentage = custom_backlight_profiles[config_no].dc_level_percentage;
caps->min_input_signal = custom_backlight_profiles[config_no].min_input_signal;
caps->max_input_signal = custom_backlight_profiles[config_no].max_input_signal;
- caps->num_data_points = custom_backlight_profiles[config_no].num_data_points;
+ caps->num_data_points = (uint8_t)custom_backlight_profiles[config_no].num_data_points;
memcpy(caps->data_points, custom_backlight_profiles[config_no].data_points, data_points_size);
return true;
}
--
2.43.0
* [PATCH 15/19] drm/amd/display: Fix dml2_0 narrowing boundaries
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Gaghik Khachatrian, Dillon Varone,
Chenyu Chen
From: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
[Why]
The drm/amd/display dml2_0 code contained implicit narrowing conversions,
flagged by compiler warnings, in timing, watermark, and translation paths.
[How]
Add explicit casts at assignment boundaries where narrowing is intentional,
keep intermediate math in wider types, and use wider timing intermediates
where required for safe range handling.
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Gaghik Khachatrian <gaghik.khachatrian@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../amd/display/dc/dml2_0/display_mode_core.c | 14 +--
.../amd/display/dc/dml2_0/display_mode_util.c | 20 ++--
.../dml2_0/dml21/dml21_translation_helper.c | 8 +-
.../amd/display/dc/dml2_0/dml21/dml21_utils.c | 2 +-
.../amd/display/dc/dml2_0/dml2_mall_phantom.c | 100 +++++++++---------
.../drm/amd/display/dc/dml2_0/dml2_policy.c | 6 +-
.../dc/dml2_0/dml2_translation_helper.c | 4 +-
.../drm/amd/display/dc/dml2_0/dml2_utils.c | 40 +++----
8 files changed, 97 insertions(+), 97 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_core.c
index 698d62fb9cf7..16514f1e4ed9 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_core.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_core.c
@@ -1119,10 +1119,10 @@ static dml_bool_t CalculatePrefetchSchedule(struct display_mode_lib_scratch_st *
if (p->myPipe->Dppclk == 0.0 || p->myPipe->Dispclk == 0.0)
return true;
- *p->DSTXAfterScaler = (dml_uint_t) dml_round(s->DPPCycles * p->myPipe->PixelClock / p->myPipe->Dppclk + s->DISPCLKCycles * p->myPipe->PixelClock / p->myPipe->Dispclk + p->DSCDelay, 1.0);
+ *p->DSTXAfterScaler = (dml_uint_t) dml_round(s->DPPCycles * p->myPipe->PixelClock / p->myPipe->Dppclk + s->DISPCLKCycles * p->myPipe->PixelClock / p->myPipe->Dispclk + p->DSCDelay, true);
*p->DSTXAfterScaler = (dml_uint_t) dml_round(*p->DSTXAfterScaler + (p->myPipe->ODMMode != dml_odm_mode_bypass ? 18 : 0) + (p->myPipe->DPPPerSurface - 1) * p->DPP_RECOUT_WIDTH +
((p->myPipe->ODMMode == dml_odm_mode_split_1to2 || p->myPipe->ODMMode == dml_odm_mode_mso_1to2) ? (dml_float_t)p->myPipe->HActive / 2.0 : 0) +
- ((p->myPipe->ODMMode == dml_odm_mode_mso_1to4) ? (dml_float_t)p->myPipe->HActive * 3.0 / 4.0 : 0), 1.0);
+ ((p->myPipe->ODMMode == dml_odm_mode_mso_1to4) ? (dml_float_t)p->myPipe->HActive * 3.0 / 4.0 : 0), true);
#ifdef __DML_VBA_DEBUG__
dml_print("DML::%s: DPPCycles = %u\n", __func__, s->DPPCycles);
@@ -4301,7 +4301,7 @@ static void CalculateSwathAndDETConfiguration(struct display_mode_lib_scratch_st
*p->compbuf_reserved_space_64b = 2 * p->PixelChunkSizeInKByte * 1024 / 64;
if (*p->UnboundedRequestEnabled) {
- *p->compbuf_reserved_space_64b = dml_max(*p->compbuf_reserved_space_64b,
+ *p->compbuf_reserved_space_64b = (dml_uint_t)dml_max(*p->compbuf_reserved_space_64b,
(dml_float_t)(p->ROBBufferSizeInKByte * 1024/64)
- (dml_float_t)(RoundedUpSwathSizeBytesY[SurfaceDoingUnboundedRequest] * TTUFIFODEPTH / MAXIMUMCOMPRESSION/64));
}
@@ -6178,9 +6178,9 @@ static void CalculateImmediateFlipBandwithSupport(
static dml_uint_t MicroSecToVertLines(dml_uint_t num_us, dml_uint_t h_total, dml_float_t pixel_clock)
{
- dml_uint_t lines_time_in_ns = 1000.0 * (h_total * 1000.0) / (pixel_clock * 1000.0);
+ dml_uint_t lines_time_in_ns = (dml_uint_t)(1000.0 * (h_total * 1000.0) / (pixel_clock * 1000.0));
- return dml_ceil(1000.0 * num_us / lines_time_in_ns, 1.0);
+ return (dml_uint_t)dml_ceil(1000.0 * num_us / lines_time_in_ns, 1.0);
}
/// @brief Calculate the maximum vstartup for mode support and mode programming consideration
@@ -6207,9 +6207,9 @@ static dml_uint_t CalculateMaxVStartup(
// + 2 is because
// 1 -> VStartup_start should be 1 line before VSync
// 1 -> always reserve 1 line between start of VBlank to VStartup signal
- dml_uint_t vblank_nom_vsync_capped = dml_max(vblank_nom_input,
+ dml_uint_t vblank_nom_vsync_capped = (dml_uint_t)dml_max(vblank_nom_input,
timing->VTotal[plane_idx] - timing->VActive[plane_idx] - timing->VFrontPorch[plane_idx] + 2);
- dml_uint_t vblank_nom_max_allowed_capped = dml_min(vblank_nom_vsync_capped, max_allowed_vblank_nom);
+ dml_uint_t vblank_nom_max_allowed_capped = (dml_uint_t)dml_min(vblank_nom_vsync_capped, max_allowed_vblank_nom);
dml_uint_t vblank_avail = (vblank_nom_max_allowed_capped == 0) ?
vblank_nom_default_in_line : vblank_nom_max_allowed_capped;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_util.c b/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_util.c
index b2fada6c44c3..3939a0d8b835 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_util.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/display_mode_util.c
@@ -80,7 +80,7 @@ static inline float dcn_bw_pow(float a, float exp)
/*ASSERT(exp == (int)exp);*/
if ((int)exp == 0)
return 1;
- temp = dcn_bw_pow(a, (int)(exp / 2));
+ temp = dcn_bw_pow(a, (float)(exp / 2));
if (((int)exp % 2) == 0) {
return temp * temp;
} else {
@@ -110,7 +110,7 @@ dml_float_t dml_ceil(dml_float_t x, dml_float_t granularity)
if (granularity == 0)
return 0;
//return (dml_float_t) (ceil(x / granularity) * granularity);
- return (dml_float_t)dcn_bw_ceil2(x, granularity);
+ return (dml_float_t)dcn_bw_ceil2((float)x, (float)granularity);
}
dml_float_t dml_floor(dml_float_t x, dml_float_t granularity)
@@ -118,7 +118,7 @@ dml_float_t dml_floor(dml_float_t x, dml_float_t granularity)
if (granularity == 0)
return 0;
//return (dml_float_t) (floor(x / granularity) * granularity);
- return (dml_float_t)dcn_bw_floor2(x, granularity);
+ return (dml_float_t)dcn_bw_floor2((float)x, (float)granularity);
}
dml_float_t dml_min(dml_float_t x, dml_float_t y)
@@ -168,12 +168,12 @@ dml_float_t dml_max5(dml_float_t a, dml_float_t b, dml_float_t c, dml_float_t d,
}
dml_float_t dml_log(dml_float_t x, dml_float_t base)
{
- return (dml_float_t) (_log(x) / _log(base));
+ return (dml_float_t) (_log((float)x) / _log((float)base));
}
dml_float_t dml_log2(dml_float_t x)
{
- return (dml_float_t) (_log(x) / _log(2));
+ return (dml_float_t) (_log((float)x) / _log(2.0f));
}
dml_float_t dml_round(dml_float_t val, dml_bool_t bankers_rounding)
@@ -184,19 +184,19 @@ dml_float_t dml_round(dml_float_t val, dml_bool_t bankers_rounding)
// else {
// return round(val);
double round_pt = 0.5;
- double ceil = dml_ceil(val, 1);
- double floor = dml_floor(val, 1);
+ double ceil = dml_ceil(val, 1.0);
+ double floor = dml_floor(val, 1.0);
if (val - floor >= round_pt)
- return ceil;
+ return (dml_float_t)ceil;
else
- return floor;
+ return (dml_float_t)floor;
// }
}
dml_float_t dml_pow(dml_float_t base, int exp)
{
- return (dml_float_t) dcn_bw_pow(base, exp);
+ return (dml_float_t) dcn_bw_pow((float)base, (float)exp);
}
dml_uint_t dml_round_to_multiple(dml_uint_t num, dml_uint_t multiple, dml_bool_t up)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
index 9031fd582ec7..d89fd876975e 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_translation_helper.c
@@ -851,10 +851,10 @@ void dml21_copy_clocks_to_dc_state(struct dml2_context *in_ctx, struct dc_state
context->bw_ctx.bw.dcn.clk.socclk_khz = in_ctx->v21.mode_programming.programming->min_clocks.dcn4x.socclk_khz;
context->bw_ctx.bw.dcn.clk.subvp_prefetch_dramclk_khz = in_ctx->v21.mode_programming.programming->min_clocks.dcn4x.svp_prefetch_no_throttle.uclk_khz;
context->bw_ctx.bw.dcn.clk.subvp_prefetch_fclk_khz = in_ctx->v21.mode_programming.programming->min_clocks.dcn4x.svp_prefetch_no_throttle.fclk_khz;
- context->bw_ctx.bw.dcn.clk.stutter_efficiency.base_efficiency = in_ctx->v21.mode_programming.programming->stutter.base_percent_efficiency;
- context->bw_ctx.bw.dcn.clk.stutter_efficiency.low_power_efficiency = in_ctx->v21.mode_programming.programming->stutter.low_power_percent_efficiency;
- context->bw_ctx.bw.dcn.clk.stutter_efficiency.z8_stutter_efficiency = in_ctx->v21.mode_programming.programming->informative.power_management.z8.stutter_efficiency;
- context->bw_ctx.bw.dcn.clk.stutter_efficiency.z8_stutter_period = in_ctx->v21.mode_programming.programming->informative.power_management.z8.stutter_period;
+ context->bw_ctx.bw.dcn.clk.stutter_efficiency.base_efficiency = (uint8_t)in_ctx->v21.mode_programming.programming->stutter.base_percent_efficiency;
+ context->bw_ctx.bw.dcn.clk.stutter_efficiency.low_power_efficiency = (uint8_t)in_ctx->v21.mode_programming.programming->stutter.low_power_percent_efficiency;
+ context->bw_ctx.bw.dcn.clk.stutter_efficiency.z8_stutter_efficiency = (uint8_t)in_ctx->v21.mode_programming.programming->informative.power_management.z8.stutter_efficiency;
+ context->bw_ctx.bw.dcn.clk.stutter_efficiency.z8_stutter_period = (int)in_ctx->v21.mode_programming.programming->informative.power_management.z8.stutter_period;
context->bw_ctx.bw.dcn.clk.zstate_support = in_ctx->v21.mode_programming.programming->z8_stutter.supported_in_blank; /*ignore meets_eco since it is not used*/
}
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_utils.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_utils.c
index 732de97335fa..835fece1d46a 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_utils.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_utils.c
@@ -267,7 +267,7 @@ static struct dc_stream_state *dml21_add_phantom_stream(struct dml2_context *dml
phantom_stream->dst.height = stream_programming->phantom_stream.descriptor.timing.v_active;
phantom_stream->src.y = 0;
- phantom_stream->src.height = (double)phantom_stream_descriptor->timing.v_active * (double)main_stream->src.height / (double)main_stream->dst.height;
+ phantom_stream->src.height = (int)((double)phantom_stream_descriptor->timing.v_active * (double)main_stream->src.height / (double)main_stream->dst.height);
phantom_stream->use_dynamic_meta = false;
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_mall_phantom.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_mall_phantom.c
index 9bbe4e058be7..fe667aea6ec8 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_mall_phantom.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_mall_phantom.c
@@ -244,9 +244,9 @@ static bool assign_subvp_pipe(struct dml2_context *ctx, struct dc_state *context
continue;
// Round up
- refresh_rate = (pipe->stream->timing.pix_clk_100hz * 100 +
+ refresh_rate = (unsigned int)((pipe->stream->timing.pix_clk_100hz * 100 +
pipe->stream->timing.v_total * pipe->stream->timing.h_total - 1)
- / (double)(pipe->stream->timing.v_total * pipe->stream->timing.h_total);
+ / (double)(pipe->stream->timing.v_total * pipe->stream->timing.h_total));
/* SubVP pipe candidate requirements:
* - Refresh rate < 120hz
* - Not able to switch in vactive naturally (switching in active means the
@@ -264,8 +264,8 @@ static bool assign_subvp_pipe(struct dml2_context *ctx, struct dc_state *context
pipe = &context->res_ctx.pipe_ctx[i];
if (num_pipes <= free_pipes) {
struct dc_stream_state *stream = pipe->stream;
- unsigned int frame_us = (stream->timing.v_total * stream->timing.h_total /
- (double)(stream->timing.pix_clk_100hz * 100)) * 1000000;
+ unsigned int frame_us = (unsigned int)((stream->timing.v_total * stream->timing.h_total /
+ (double)(stream->timing.pix_clk_100hz * 100)) * 1000000);
if (frame_us > max_frame_time && !stream->ignore_msa_timing_param) {
*index = i;
max_frame_time = frame_us;
@@ -382,8 +382,8 @@ static bool subvp_subvp_schedulable(struct dml2_context *ctx, struct dc_state *c
phantom->timing.v_addressable;
// Round up when calculating microschedule time (+ 1 at the end)
- time_us = (microschedule_lines * phantom->timing.h_total) /
- (double)(phantom->timing.pix_clk_100hz * 100) * 1000000 +
+ time_us = (uint32_t)((microschedule_lines * phantom->timing.h_total) /
+ (double)(phantom->timing.pix_clk_100hz * 100) * 1000000) +
ctx->config.svp_pstate.subvp_prefetch_end_to_mall_start_us +
ctx->config.svp_pstate.subvp_fw_processing_delay_us + 1;
if (time_us > max_microschedule_us)
@@ -402,16 +402,16 @@ static bool subvp_subvp_schedulable(struct dml2_context *ctx, struct dc_state *c
if (index < 2 || !subvp_pipes[0] || !subvp_pipes[1])
return false;
- vactive1_us = ((subvp_pipes[0]->stream->timing.v_addressable * subvp_pipes[0]->stream->timing.h_total) /
- (double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100)) * 1000000;
- vactive2_us = ((subvp_pipes[1]->stream->timing.v_addressable * subvp_pipes[1]->stream->timing.h_total) /
- (double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100)) * 1000000;
- vblank1_us = (((subvp_pipes[0]->stream->timing.v_total - subvp_pipes[0]->stream->timing.v_addressable) *
+ vactive1_us = (int32_t)(((subvp_pipes[0]->stream->timing.v_addressable * subvp_pipes[0]->stream->timing.h_total) /
+ (double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100)) * 1000000);
+ vactive2_us = (int32_t)(((subvp_pipes[1]->stream->timing.v_addressable * subvp_pipes[1]->stream->timing.h_total) /
+ (double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100)) * 1000000);
+ vblank1_us = (int32_t)(((subvp_pipes[0]->stream->timing.v_total - subvp_pipes[0]->stream->timing.v_addressable) *
subvp_pipes[0]->stream->timing.h_total) /
- (double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100)) * 1000000;
- vblank2_us = (((subvp_pipes[1]->stream->timing.v_total - subvp_pipes[1]->stream->timing.v_addressable) *
+ (double)(subvp_pipes[0]->stream->timing.pix_clk_100hz * 100) * 1000000);
+ vblank2_us = (int32_t)(((subvp_pipes[1]->stream->timing.v_total - subvp_pipes[1]->stream->timing.v_addressable) *
subvp_pipes[1]->stream->timing.h_total) /
- (double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100)) * 1000000;
+ (double)(subvp_pipes[1]->stream->timing.pix_clk_100hz * 100) * 1000000);
if ((vactive1_us - vblank2_us) / 2 > max_microschedule_us &&
(vactive2_us - vblank1_us) / 2 > max_microschedule_us)
@@ -445,13 +445,13 @@ bool dml2_svp_drr_schedulable(struct dml2_context *ctx, struct dc_state *context
struct dc_crtc_timing *main_timing = NULL;
struct dc_crtc_timing *phantom_timing = NULL;
struct dc_stream_state *phantom_stream;
- int16_t prefetch_us = 0;
- int16_t mall_region_us = 0;
- int16_t drr_frame_us = 0; // nominal frame time
- int16_t subvp_active_us = 0;
- int16_t stretched_drr_us = 0;
- int16_t drr_stretched_vblank_us = 0;
- int16_t max_vblank_mallregion = 0;
+ int32_t prefetch_us = 0;
+ int32_t mall_region_us = 0;
+ int32_t drr_frame_us = 0; // nominal frame time
+ int32_t subvp_active_us = 0;
+ int32_t stretched_drr_us = 0;
+ int32_t drr_stretched_vblank_us = 0;
+ int32_t max_vblank_mallregion = 0;
// Find SubVP pipe
for (i = 0; i < ctx->config.dcn_pipe_count; i++) {
@@ -475,19 +475,19 @@ bool dml2_svp_drr_schedulable(struct dml2_context *ctx, struct dc_state *context
phantom_stream = ctx->config.svp_pstate.callbacks.get_paired_subvp_stream(context, pipe->stream);
main_timing = &pipe->stream->timing;
phantom_timing = &phantom_stream->timing;
- prefetch_us = (phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
+ prefetch_us = (int32_t)((phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
(double)(phantom_timing->pix_clk_100hz * 100) * 1000000 +
- ctx->config.svp_pstate.subvp_prefetch_end_to_mall_start_us;
- subvp_active_us = main_timing->v_addressable * main_timing->h_total /
- (double)(main_timing->pix_clk_100hz * 100) * 1000000;
- drr_frame_us = drr_timing->v_total * drr_timing->h_total /
- (double)(drr_timing->pix_clk_100hz * 100) * 1000000;
+ ctx->config.svp_pstate.subvp_prefetch_end_to_mall_start_us);
+ subvp_active_us = (int32_t)(main_timing->v_addressable * main_timing->h_total /
+ (double)(main_timing->pix_clk_100hz * 100) * 1000000);
+ drr_frame_us = (int32_t)(drr_timing->v_total * drr_timing->h_total /
+ (double)(drr_timing->pix_clk_100hz * 100) * 1000000);
// P-State allow width and FW delays already included phantom_timing->v_addressable
- mall_region_us = phantom_timing->v_addressable * phantom_timing->h_total /
- (double)(phantom_timing->pix_clk_100hz * 100) * 1000000;
+ mall_region_us = (int32_t)(phantom_timing->v_addressable * phantom_timing->h_total /
+ (double)(phantom_timing->pix_clk_100hz * 100) * 1000000);
stretched_drr_us = drr_frame_us + mall_region_us + SUBVP_DRR_MARGIN_US;
- drr_stretched_vblank_us = (drr_timing->v_total - drr_timing->v_addressable) * drr_timing->h_total /
- (double)(drr_timing->pix_clk_100hz * 100) * 1000000 + (stretched_drr_us - drr_frame_us);
+ drr_stretched_vblank_us = (int32_t)((drr_timing->v_total - drr_timing->v_addressable) * drr_timing->h_total /
+ (double)(drr_timing->pix_clk_100hz * 100) * 1000000 + (stretched_drr_us - drr_frame_us));
max_vblank_mallregion = drr_stretched_vblank_us > mall_region_us ? drr_stretched_vblank_us : mall_region_us;
/* We consider SubVP + DRR schedulable if the stretched frame duration of the DRR display (i.e. the
@@ -526,12 +526,12 @@ static bool subvp_vblank_schedulable(struct dml2_context *ctx, struct dc_state *
bool schedulable = false;
uint32_t i = 0;
uint8_t vblank_index = 0;
- uint16_t prefetch_us = 0;
- uint16_t mall_region_us = 0;
- uint16_t vblank_frame_us = 0;
- uint16_t subvp_active_us = 0;
- uint16_t vblank_blank_us = 0;
- uint16_t max_vblank_mallregion = 0;
+ uint32_t prefetch_us = 0;
+ uint32_t mall_region_us = 0;
+ uint32_t vblank_frame_us = 0;
+ uint32_t subvp_active_us = 0;
+ uint32_t vblank_blank_us = 0;
+ uint32_t max_vblank_mallregion = 0;
struct dc_crtc_timing *main_timing = NULL;
struct dc_crtc_timing *phantom_timing = NULL;
struct dc_crtc_timing *vblank_timing = NULL;
@@ -581,18 +581,18 @@ static bool subvp_vblank_schedulable(struct dml2_context *ctx, struct dc_state *
vblank_timing = &context->res_ctx.pipe_ctx[vblank_index].stream->timing;
// Prefetch time is equal to VACTIVE + BP + VSYNC of the phantom pipe
// Also include the prefetch end to mallstart delay time
- prefetch_us = (phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
+ prefetch_us = (uint32_t)((phantom_timing->v_total - phantom_timing->v_front_porch) * phantom_timing->h_total /
(double)(phantom_timing->pix_clk_100hz * 100) * 1000000 +
- ctx->config.svp_pstate.subvp_prefetch_end_to_mall_start_us;
+ ctx->config.svp_pstate.subvp_prefetch_end_to_mall_start_us);
// P-State allow width and FW delays already included phantom_timing->v_addressable
- mall_region_us = phantom_timing->v_addressable * phantom_timing->h_total /
- (double)(phantom_timing->pix_clk_100hz * 100) * 1000000;
- vblank_frame_us = vblank_timing->v_total * vblank_timing->h_total /
- (double)(vblank_timing->pix_clk_100hz * 100) * 1000000;
- vblank_blank_us = (vblank_timing->v_total - vblank_timing->v_addressable) * vblank_timing->h_total /
- (double)(vblank_timing->pix_clk_100hz * 100) * 1000000;
- subvp_active_us = main_timing->v_addressable * main_timing->h_total /
- (double)(main_timing->pix_clk_100hz * 100) * 1000000;
+ mall_region_us = (uint32_t)(phantom_timing->v_addressable * phantom_timing->h_total /
+ (double)(phantom_timing->pix_clk_100hz * 100) * 1000000);
+ vblank_frame_us = (uint32_t)(vblank_timing->v_total * vblank_timing->h_total /
+ (double)(vblank_timing->pix_clk_100hz * 100) * 1000000);
+ vblank_blank_us = (uint32_t)((vblank_timing->v_total - vblank_timing->v_addressable) * vblank_timing->h_total /
+ (double)(vblank_timing->pix_clk_100hz * 100) * 1000000);
+ subvp_active_us = (uint32_t)(main_timing->v_addressable * main_timing->h_total /
+ (double)(main_timing->pix_clk_100hz * 100) * 1000000);
max_vblank_mallregion = vblank_blank_us > mall_region_us ? vblank_blank_us : mall_region_us;
// Schedulable if VACTIVE region of the SubVP pipe can fit the MALL prefetch, VBLANK frame time,
@@ -694,10 +694,10 @@ static void set_phantom_stream_timing(struct dml2_context *ctx, struct dc_state
}
// Calculate lines required for pstate allow width and FW processing delays
- pstate_width_fw_delay_lines = ((double)(ctx->config.svp_pstate.subvp_fw_processing_delay_us +
+ pstate_width_fw_delay_lines = (uint32_t)(((double)(ctx->config.svp_pstate.subvp_fw_processing_delay_us +
ctx->config.svp_pstate.subvp_pstate_allow_width_us) / 1000000) *
(ref_pipe->stream->timing.pix_clk_100hz * 100) /
- (double)ref_pipe->stream->timing.h_total;
+ (double)ref_pipe->stream->timing.h_total);
// DML calculation for MALL region doesn't take into account FW delay
// and required pstate allow width for multi-display cases
@@ -712,7 +712,7 @@ static void set_phantom_stream_timing(struct dml2_context *ctx, struct dc_state
fp_and_sync_width_time = (phantom_stream->timing.v_front_porch + phantom_stream->timing.v_sync_width) * line_time;
if ((svp_vstartup * line_time) + fp_and_sync_width_time > cvt_rb_vblank_max) {
- svp_vstartup = (cvt_rb_vblank_max - fp_and_sync_width_time) / line_time;
+ svp_vstartup = (unsigned int)((cvt_rb_vblank_max - fp_and_sync_width_time) / line_time);
}
// For backporch of phantom pipe, use vstartup of the main pipe
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_policy.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_policy.c
index ef693f608d59..ab2964811c5b 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_policy.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_policy.c
@@ -123,9 +123,9 @@ int dml2_policy_build_synthetic_soc_states(struct dml2_policy_build_synthetic_so
struct dml2_policy_build_synthetic_soc_states_params *p)
{
int i, j;
- unsigned int min_fclk_mhz = p->in_states->state_array[0].fabricclk_mhz;
- unsigned int min_dcfclk_mhz = p->in_states->state_array[0].dcfclk_mhz;
- unsigned int min_socclk_mhz = p->in_states->state_array[0].socclk_mhz;
+ unsigned int min_fclk_mhz = (unsigned int)p->in_states->state_array[0].fabricclk_mhz;
+ unsigned int min_dcfclk_mhz = (unsigned int)p->in_states->state_array[0].dcfclk_mhz;
+ unsigned int min_socclk_mhz = (unsigned int)p->in_states->state_array[0].socclk_mhz;
int max_dcfclk_mhz = 0, max_dispclk_mhz = 0, max_dppclk_mhz = 0,
max_phyclk_mhz = 0, max_dtbclk_mhz = 0, max_fclk_mhz = 0,
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_translation_helper.c
index 8e0997441ee0..0d8ff236c6d0 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_translation_helper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_translation_helper.c
@@ -508,11 +508,11 @@ void dml2_init_soc_states(struct dml2_context *dml2, const struct dc *in_dc,
/* DCFCLK stas values are project specific */
if ((dml2->v20.dml_core_ctx.project == dml_project_dcn32) ||
(dml2->v20.dml_core_ctx.project == dml_project_dcn321)) {
- p->dcfclk_stas_mhz[0] = p->in_states->state_array[0].dcfclk_mhz;
+ p->dcfclk_stas_mhz[0] = (int)p->in_states->state_array[0].dcfclk_mhz;
p->dcfclk_stas_mhz[1] = 615;
p->dcfclk_stas_mhz[2] = 906;
p->dcfclk_stas_mhz[3] = 1324;
- p->dcfclk_stas_mhz[4] = p->in_states->state_array[1].dcfclk_mhz;
+ p->dcfclk_stas_mhz[4] = (int)p->in_states->state_array[1].dcfclk_mhz;
} else if (dml2->v20.dml_core_ctx.project != dml_project_dcn35 &&
dml2->v20.dml_core_ctx.project != dml_project_dcn36 &&
dml2->v20.dml_core_ctx.project != dml_project_dcn351) {
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_utils.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_utils.c
index 86567e232415..1bc81e26a11f 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_utils.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_utils.c
@@ -333,7 +333,7 @@ void dml2_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_state *cont
}
context->bw_ctx.bw.dcn.compbuf_size_kb -= context->res_ctx.pipe_ctx[dc_pipe_ctx_index].det_buffer_size_kb;
- context->res_ctx.pipe_ctx[dc_pipe_ctx_index].plane_res.bw.dppclk_khz = dml_get_dppclk_calculated(&context->bw_ctx.dml2->v20.dml_core_ctx, dml_pipe_idx) * 1000;
+ context->res_ctx.pipe_ctx[dc_pipe_ctx_index].plane_res.bw.dppclk_khz = (int)(dml_get_dppclk_calculated(&context->bw_ctx.dml2->v20.dml_core_ctx, dml_pipe_idx) * 1000);
if (context->bw_ctx.bw.dcn.clk.dppclk_khz < context->res_ctx.pipe_ctx[dc_pipe_ctx_index].plane_res.bw.dppclk_khz)
context->bw_ctx.bw.dcn.clk.dppclk_khz = context->res_ctx.pipe_ctx[dc_pipe_ctx_index].plane_res.bw.dppclk_khz;
@@ -362,10 +362,10 @@ void dml2_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_state *cont
context->bw_ctx.bw.dcn.clk.bw_dppclk_khz = context->bw_ctx.bw.dcn.clk.dppclk_khz;
context->bw_ctx.bw.dcn.clk.bw_dispclk_khz = context->bw_ctx.bw.dcn.clk.dispclk_khz;
- context->bw_ctx.bw.dcn.clk.max_supported_dppclk_khz = in_ctx->v20.dml_core_ctx.states.state_array[in_ctx->v20.scratch.mode_support_params.out_lowest_state_idx].dppclk_mhz
- * 1000;
- context->bw_ctx.bw.dcn.clk.max_supported_dispclk_khz = in_ctx->v20.dml_core_ctx.states.state_array[in_ctx->v20.scratch.mode_support_params.out_lowest_state_idx].dispclk_mhz
- * 1000;
+ context->bw_ctx.bw.dcn.clk.max_supported_dppclk_khz = (int)(in_ctx->v20.dml_core_ctx.states.state_array[in_ctx->v20.scratch.mode_support_params.out_lowest_state_idx].dppclk_mhz
+ * 1000);
+ context->bw_ctx.bw.dcn.clk.max_supported_dispclk_khz = (int)(in_ctx->v20.dml_core_ctx.states.state_array[in_ctx->v20.scratch.mode_support_params.out_lowest_state_idx].dispclk_mhz
+ * 1000);
if (dc->config.forced_clocks || dc->debug.max_disp_clk) {
context->bw_ctx.bw.dcn.clk.bw_dispclk_khz = context->bw_ctx.bw.dcn.clk.max_supported_dispclk_khz;
@@ -375,18 +375,18 @@ void dml2_calculate_rq_and_dlg_params(const struct dc *dc, struct dc_state *cont
void dml2_extract_watermark_set(struct dcn_watermarks *watermark, struct display_mode_lib_st *dml_core_ctx)
{
- watermark->urgent_ns = dml_get_wm_urgent(dml_core_ctx) * 1000;
- watermark->cstate_pstate.cstate_enter_plus_exit_ns = dml_get_wm_stutter_enter_exit(dml_core_ctx) * 1000;
- watermark->cstate_pstate.cstate_exit_ns = dml_get_wm_stutter_exit(dml_core_ctx) * 1000;
- watermark->cstate_pstate.pstate_change_ns = dml_get_wm_dram_clock_change(dml_core_ctx) * 1000;
- watermark->pte_meta_urgent_ns = dml_get_wm_memory_trip(dml_core_ctx) * 1000;
- watermark->frac_urg_bw_nom = dml_get_fraction_of_urgent_bandwidth(dml_core_ctx) * 1000;
- watermark->frac_urg_bw_flip = dml_get_fraction_of_urgent_bandwidth_imm_flip(dml_core_ctx) * 1000;
- watermark->urgent_latency_ns = dml_get_urgent_latency(dml_core_ctx) * 1000;
- watermark->cstate_pstate.fclk_pstate_change_ns = dml_get_wm_fclk_change(dml_core_ctx) * 1000;
- watermark->usr_retraining_ns = dml_get_wm_usr_retraining(dml_core_ctx) * 1000;
- watermark->cstate_pstate.cstate_enter_plus_exit_z8_ns = dml_get_wm_z8_stutter_enter_exit(dml_core_ctx) * 1000;
- watermark->cstate_pstate.cstate_exit_z8_ns = dml_get_wm_z8_stutter(dml_core_ctx) * 1000;
+ watermark->urgent_ns = (uint32_t)(dml_get_wm_urgent(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.cstate_enter_plus_exit_ns = (uint32_t)(dml_get_wm_stutter_enter_exit(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.cstate_exit_ns = (uint32_t)(dml_get_wm_stutter_exit(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.pstate_change_ns = (uint32_t)(dml_get_wm_dram_clock_change(dml_core_ctx) * 1000);
+ watermark->pte_meta_urgent_ns = (uint32_t)(dml_get_wm_memory_trip(dml_core_ctx) * 1000);
+ watermark->frac_urg_bw_nom = (uint32_t)(dml_get_fraction_of_urgent_bandwidth(dml_core_ctx) * 1000);
+ watermark->frac_urg_bw_flip = (uint32_t)(dml_get_fraction_of_urgent_bandwidth_imm_flip(dml_core_ctx) * 1000);
+ watermark->urgent_latency_ns = (uint32_t)(dml_get_urgent_latency(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.fclk_pstate_change_ns = (uint32_t)(dml_get_wm_fclk_change(dml_core_ctx) * 1000);
+ watermark->usr_retraining_ns = (uint32_t)(dml_get_wm_usr_retraining(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.cstate_enter_plus_exit_z8_ns = (uint32_t)(dml_get_wm_z8_stutter_enter_exit(dml_core_ctx) * 1000);
+ watermark->cstate_pstate.cstate_exit_z8_ns = (uint32_t)(dml_get_wm_z8_stutter(dml_core_ctx) * 1000);
}
unsigned int dml2_calc_max_scaled_time(
@@ -434,9 +434,9 @@ void dml2_extract_writeback_wm(struct dc_state *context, struct display_mode_lib
for (j = 0 ; j < 4; j++) {
/*current dml only has one set of watermark, need to follow up*/
bw_writeback->mcif_wb_arb[i].cli_watermark[j] =
- dml_get_wm_writeback_urgent(dml_core_ctx) * 1000;
+ (unsigned int)(dml_get_wm_writeback_urgent(dml_core_ctx) * 1000);
bw_writeback->mcif_wb_arb[i].pstate_watermark[j] =
- dml_get_wm_writeback_dram_clock_change(dml_core_ctx) * 1000;
+ (unsigned int)(dml_get_wm_writeback_dram_clock_change(dml_core_ctx) * 1000);
}
if (context->res_ctx.pipe_ctx[i].stream->phy_pix_clk != 0) {
/* time_per_pixel should be in u6.6 format */
@@ -450,7 +450,7 @@ void dml2_extract_writeback_wm(struct dc_state *context, struct display_mode_lib
wbif_mode, wb_arb_params->cli_watermark[0]);
/*not required any more*/
bw_writeback->mcif_wb_arb[i].dram_speed_change_duration =
- dml_get_wm_writeback_dram_clock_change(dml_core_ctx) * 1000;
+ (unsigned int)(dml_get_wm_writeback_dram_clock_change(dml_core_ctx) * 1000);
}
}
--
2.43.0
* [PATCH 16/19] drm/amd/display: Add README.md file to DML2_0 repository
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (14 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 15/19] drm/amd/display: Fix dml2_0 narrowing boundaries Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 17/19] drm/amd/display: Fix DPMS using partially updated pipe context Chenyu Chen
` (3 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Samson Tam, Dillon Varone, Chenyu Chen
From: Samson Tam <samson.tam@amd.com>
[Why/How]
Add a README.md file to the repository.
Use it to categorize directories for tracking purposes.
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Samson Tam <samson.tam@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../gpu/drm/amd/display/dc/dml2_0/README.md | 31 +++++++++++++++++++
1 file changed, 31 insertions(+)
create mode 100644 drivers/gpu/drm/amd/display/dc/dml2_0/README.md
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/README.md b/drivers/gpu/drm/amd/display/dc/dml2_0/README.md
new file mode 100644
index 000000000000..9e8814fbe52f
--- /dev/null
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/README.md
@@ -0,0 +1,31 @@
+# DML2_0 (Display Mode Library 2.0) repository
+
+## Category to Directory Mapping
+
+```yaml
+directory_categories:
+ app_tools:
+ - dml21/build/
+ - dml21/src/dml2_unit_test/
+ - utils/
+
+ driver_hw_dependent:
+ - dml21/inc/bounding_boxes/
+ - dml21/src/dml2_cga/
+ - dml21/src/dml2_core/
+ - dml21/src/dml2_dpmm/
+ - dml21/src/dml2_mcg/
+ - dml21/src/dml2_pmo/
+ - dml21/src/dml2_standalone_libraries/
+ - dml21/src/dml2_utm_soc_bb/
+
+ driver_hw_independent:
+ - ./
+ - dml21/
+ - dml21/inc/
+ - dml21/src/dml2_top/
+ - dml21/src/inc/
+
+ undefined:
+ - .github/
+```
--
2.43.0
* [PATCH 17/19] drm/amd/display: Fix DPMS using partially updated pipe context
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (15 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 16/19] drm/amd/display: Add README.md file to DML2_0 repository Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 18/19] drm/amd/display: Move dml2_destroy to non-FPU compilation unit Chenyu Chen
` (2 subsequent siblings)
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Dominik Kaszewski, Wenjing Liu,
Chenyu Chen
From: Dominik Kaszewski <dominik.kaszewski@amd.com>
[Why & How]
DPMS functions should not use the partially updated pipe context passed
as an argument to commit_planes_do_stream_update. Instead, they should
use the pipe context in current_state, which is guaranteed to be the
most recently programmed HW config.
Reviewed-by: Wenjing Liu <wenjing.liu@amd.com>
Signed-off-by: Dominik Kaszewski <dominik.kaszewski@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
drivers/gpu/drm/amd/display/dc/core/dc.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
index 9ff5503d4df7..0c7c84276d1f 100644
--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
+++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -3898,27 +3898,34 @@ static void commit_planes_do_stream_update(struct dc *dc,
resource_build_test_pattern_params(&context->res_ctx, pipe_ctx);
}
+ // DPMS should not use partially updated pipe context
+ struct pipe_ctx *dpms_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[j];
+
if (stream_update->dpms_off) {
if (*stream_update->dpms_off) {
- dc->link_srv->set_dpms_off(pipe_ctx);
+ dc->link_srv->set_dpms_off(dpms_pipe_ctx);
/* for dpms, keep acquired resources*/
- if (pipe_ctx->stream_res.audio && !dc->debug.az_endpoint_mute_only)
- pipe_ctx->stream_res.audio->funcs->az_disable(pipe_ctx->stream_res.audio);
+ if (dpms_pipe_ctx->stream_res.audio && !dc->debug.az_endpoint_mute_only) {
+ struct audio *audio = dpms_pipe_ctx->stream_res.audio;
+
+ audio->funcs->az_disable(audio);
+ }
dc->optimized_required = true;
} else {
if (get_seamless_boot_stream_count(context) == 0)
dc->hwss.prepare_bandwidth(dc, dc->current_state);
- dc->link_srv->set_dpms_on(dc->current_state, pipe_ctx);
+ dc->link_srv->set_dpms_on(dc->current_state, dpms_pipe_ctx);
}
- } else if (pipe_ctx->stream->link->wa_flags.blank_stream_on_ocs_change && stream_update->output_color_space
- && !stream->dpms_off && dc_is_dp_signal(pipe_ctx->stream->signal)) {
+ } else if (dpms_pipe_ctx->stream->link->wa_flags.blank_stream_on_ocs_change &&
+ stream_update->output_color_space &&
+ !stream->dpms_off && dc_is_dp_signal(dpms_pipe_ctx->stream->signal)) {
/*
* Workaround for firmware issue in some receivers where they don't pick up
* correct output color space unless DP link is disabled/re-enabled
*/
- dc->link_srv->set_dpms_on(dc->current_state, pipe_ctx);
+ dc->link_srv->set_dpms_on(dc->current_state, dpms_pipe_ctx);
}
if (stream_update->abm_level && pipe_ctx->stream_res.abm) {
--
2.43.0
* [PATCH 18/19] drm/amd/display: Move dml2_destroy to non-FPU compilation unit
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (16 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 17/19] drm/amd/display: Fix DPMS using partially updated pipe context Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-15 7:39 ` [PATCH 19/19] drm/amd/display: Promote DC to 3.2.379 Chenyu Chen
2026-04-20 12:54 ` [PATCH 00/19] DC Patches Apr 20 2026 Wheeler, Daniel
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Rafal Ostrowski, Dillon Varone,
Chenyu Chen
From: Rafal Ostrowski <rafal.ostrowski@amd.com>
On PREEMPT_RT kernels, vfree() can sleep because spin_lock is
converted to rt_mutex. dml2_destroy() calls vfree() while inside
an FPU-guarded region (preempt_count=2), which is illegal.
dml2_wrapper_fpu.c is compiled with CC_FLAGS_FPU which defines
_LINUX_FPU_COMPILATION_UNIT, making DC_RUN_WITH_PREEMPTION_ENABLED()
resolve to a no-op. This prevents the macro from cycling FPU
context off/on around vfree().
Move dml2_destroy() to dml2_wrapper.c (non-FPU compilation unit)
where DC_RUN_WITH_PREEMPTION_ENABLED() properly cycles DC_FP_END/
DC_FP_START around vfree(). This pairs it with dml2_allocate_memory()
which already lives there.
Reviewed-by: Dillon Varone <dillon.varone@amd.com>
Signed-off-by: Rafal Ostrowski <rafal.ostrowski@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
---
.../drm/amd/display/dc/dml2_0/dml21/dml21_wrapper.c | 4 ++--
drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper.c | 11 +++++++++++
.../gpu/drm/amd/display/dc/dml2_0/dml2_wrapper_fpu.c | 10 ----------
3 files changed, 13 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_wrapper.c
index 7398f8b69adb..8bed59e976d1 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_wrapper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml21/dml21_wrapper.c
@@ -58,8 +58,8 @@ bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const s
void dml21_destroy(struct dml2_context *dml2)
{
- vfree(dml2->v21.dml_init.dml2_instance);
- vfree(dml2->v21.mode_programming.programming);
+ DC_RUN_WITH_PREEMPTION_ENABLED(vfree(dml2->v21.dml_init.dml2_instance));
+ DC_RUN_WITH_PREEMPTION_ENABLED(vfree(dml2->v21.mode_programming.programming));
}
void dml21_copy(struct dml2_context *dst_dml_ctx,
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper.c
index 93b7613fc4f2..1772e74349c7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper.c
@@ -108,6 +108,17 @@ bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options
return true;
}
+void dml2_destroy(struct dml2_context *dml2)
+{
+ if (!dml2)
+ return;
+
+ if (dml2->architecture == dml2_architecture_21)
+ dml21_destroy(dml2);
+
+ DC_RUN_WITH_PREEMPTION_ENABLED(vfree(dml2));
+}
+
void dml2_reinit(const struct dc *in_dc,
const struct dml2_configuration_options *config,
struct dml2_context **dml2)
diff --git a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper_fpu.c b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper_fpu.c
index 66624cfc27b1..a14e3004a7b7 100644
--- a/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml2_0/dml2_wrapper_fpu.c
@@ -548,16 +548,6 @@ void dml2_apply_debug_options(const struct dc *dc, struct dml2_context *dml2)
}
}
-void dml2_destroy(struct dml2_context *dml2)
-{
- if (!dml2)
- return;
-
- if (dml2->architecture == dml2_architecture_21)
- dml21_destroy(dml2);
- vfree(dml2);
-}
-
void dml2_extract_dram_and_fclk_change_support(struct dml2_context *dml2,
unsigned int *fclk_change_support, unsigned int *dram_clk_change_support)
{
--
2.43.0
* [PATCH 19/19] drm/amd/display: Promote DC to 3.2.379
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (17 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 18/19] drm/amd/display: Move dml2_destroy to non-FPU compilation unit Chenyu Chen
@ 2026-04-15 7:39 ` Chenyu Chen
2026-04-20 12:54 ` [PATCH 00/19] DC Patches Apr 20 2026 Wheeler, Daniel
19 siblings, 0 replies; 22+ messages in thread
From: Chenyu Chen @ 2026-04-15 7:39 UTC (permalink / raw)
To: amd-gfx
Cc: Harry Wentland, Leo Li, Aurabindo Pillai, Roman Li, Wayne Lin,
Tom Chung, Fangzhi Zuo, Dan Wheeler, Ray Wu, Ivan Lipski,
Alex Hung, Chuanyu Tseng, Taimur Hassan, Chenyu Chen, Tom Chung
From: Taimur Hassan <Syed.Hassan@amd.com>
This version brings along the following updates:
- Add allow_clock_gating to dcn42 dccg.
- Bypass post csc for additional color spaces in dcn42.
- Remove unused dml2_project.
- Unset Replay desync error verification by default.
- Align HWSS fast commit path with legacy path.
- Fix implicit narrowing conversion warnings.
- Fix double free.
- Introduce power module on Linux.
- Add power module on Linux.
- Fix fpu guard warning.
- Add Replay/PSR active check in link loss status check.
- Remove SYMCLK F and G values from link encoder and MANUAL_FLOW_CONTROL from optc.
- Add minimum vfp requirement.
- Fix narrowing boundaries and eDP parser assignment.
- Fix dml2_0 narrowing boundaries.
- Add README.md file to DML2_0 repository.
- Fix DPMS using partially updated pipe context.
- Move dml2_destroy to non-FPU compilation unit.
Signed-off-by: Taimur Hassan <Syed.Hassan@amd.com>
Signed-off-by: Chenyu Chen <chen-yu.chen@amd.com>
Acked-by: Tom Chung <ChiaHsuan.Chung@amd.com>
---
drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
index 1b10b9770982..50ec5acb6c7b 100644
--- a/drivers/gpu/drm/amd/display/dc/dc.h
+++ b/drivers/gpu/drm/amd/display/dc/dc.h
@@ -63,7 +63,7 @@ struct dcn_dsc_reg_state;
struct dcn_optc_reg_state;
struct dcn_dccg_reg_state;
-#define DC_VER "3.2.378"
+#define DC_VER "3.2.379"
/**
* MAX_SURFACES - representative of the upper bound of surfaces that can be piped to a single CRTC
--
2.43.0
* RE: [PATCH 00/19] DC Patches Apr 20 2026
2026-04-15 7:39 [PATCH 00/19] DC Patches Apr 20 2026 Chenyu Chen
` (18 preceding siblings ...)
2026-04-15 7:39 ` [PATCH 19/19] drm/amd/display: Promote DC to 3.2.379 Chenyu Chen
@ 2026-04-20 12:54 ` Wheeler, Daniel
19 siblings, 0 replies; 22+ messages in thread
From: Wheeler, Daniel @ 2026-04-20 12:54 UTC (permalink / raw)
To: Chen, Chen-Yu, amd-gfx@lists.freedesktop.org
Cc: Wentland, Harry, Li, Sun peng (Leo), Pillai, Aurabindo, Li, Roman,
Lin, Wayne, Chung, ChiaHsuan (Tom), Zuo, Jerry, Wu, Ray,
LIPSKI, IVAN, Hung, Alex, Tseng, Chuan Yu (Max), Chen, Chen-Yu
[Public]
Hi all,
This week, this patchset was tested on 4 systems (two dGPU and two APU based) across multiple display and connection types.
APU
* Single Display eDP -> 1080p 60hz, 1920x1200 165hz, 3840x2400 60hz
* Single Display DP (SST DSC) -> 4k144hz, 4k240hz
* Multi display -> eDP + DP/HDMI/USB-C -> 1080p 60hz eDP + 4k 144hz, 4k 240hz (Includes USB-C to DP/HDMI adapters)
* Thunderbolt -> LG Ultrafine 5k
* MST DSC -> Cable Matters 101075 (DP to 3x DP) with 3x 4k60hz displays, HP Hook G2 with 2x 4k60hz displays
* USB 4 -> HP Hook G4, Lenovo Thunderbolt Dock, both with 2x 4k60hz DP and 1x 4k60hz HDMI displays
* SST PCON -> Club3D CAC-1085 + 1x 4k 144hz, FRL3, at a max resolution supported by the dongle of 4k 120hz YUV420 12bpc.
* MST PCON -> 1x 4k 144hz, FRL3, at a max resolution supported by the adapter of 4k 120hz RGB 8bpc.
DGPU
* Single Display DP (SST DSC) -> 4k144hz, 4k240hz
* Multiple Display DP -> 4k240hz + 4k144hz
* MST (Startech MST14DP123DP [DP to 3x DP] and 2x 4k 60hz displays)
* MST DSC (with Cable Matters 101075 [DP to 3x DP] with 3x 4k60hz displays)
The testing is a mix of automated and manual tests. Manual testing includes (but is not limited to)
* Changing display configurations and settings
* Video/Audio playback
* Benchmark testing
* Suspend/Resume testing
* Feature testing (Freesync, HDCP, etc.)
Automated testing includes (but is not limited to)
* Script testing (scripts to automate some of the manual checks)
* IGT testing
The testing is mainly performed on the following displays, but occasionally there are tests with other displays
* Samsung G8 Neo 4k240hz
* Samsung QN55QN95B 4k 120hz
* Acer XV322QKKV 4k144hz
* HP U27 4k Wireless 4k60hz
* LG 27UD58B 4k60hz
* LG 32UN650WA 4k60hz
* LG Ultrafine 5k 5k60hz
* AU Optronics B140HAN01.1 1080p 60hz eDP
* AU Optronics B160UAN01.J 1920x1200 165hz eDP
* Samsung ATNA60YV02-0 3840x2400 60Hz OLED eDP
The patchset consists of the amd-staging-drm-next branch (Head commit - 5fc862467a34397739ce66733b6344c4e671ef50 -> drm/amdgpu/userq: unmap_helper dont return the queue state) with new patches added on top of it.
Tested on Ubuntu 24.04.3, on Wayland and X11, using Gnome.
Tested-by: Dan Wheeler <daniel.wheeler@amd.com>
Thank you,
Dan Wheeler
Sr. Technologist | AMD
SW Display
------------------------------------------------------------------------------------------------------------------
1 Commerce Valley Dr E, Thornhill, ON L3T 7X6
amd.com
-----Original Message-----
From: Chenyu Chen <chen-yu.chen@amd.com>
Sent: Wednesday, April 15, 2026 3:40 AM
To: amd-gfx@lists.freedesktop.org
Cc: Wentland, Harry <Harry.Wentland@amd.com>; Li, Sun peng (Leo) <Sunpeng.Li@amd.com>; Pillai, Aurabindo <Aurabindo.Pillai@amd.com>; Li, Roman <Roman.Li@amd.com>; Lin, Wayne <Wayne.Lin@amd.com>; Chung, ChiaHsuan (Tom) <ChiaHsuan.Chung@amd.com>; Zuo, Jerry <Jerry.Zuo@amd.com>; Wheeler, Daniel <Daniel.Wheeler@amd.com>; Wu, Ray <Ray.Wu@amd.com>; LIPSKI, IVAN <IVAN.LIPSKI@amd.com>; Hung, Alex <Alex.Hung@amd.com>; Tseng, Chuan Yu (Max) <ChuanYu.Tseng@amd.com>; Chen, Chen-Yu <Chen-Yu.Chen@amd.com>
Subject: [PATCH 00/19] DC Patches Apr 20 2026
This DC patchset brings improvements in multiple areas. In summary, we highlight:
- Add allow_clock_gating to dcn42 dccg.
- Bypass post csc for additional color spaces in dcn42.
- Remove unused dml2_project.
- Unset Replay desync error verification by default.
- Align HWSS fast commit path with legacy path.
- Fix implicit narrowing conversion warnings.
- Enable driver power gating.
- Fix double free.
- Introduce power module on Linux.
- Add power module on Linux.
- Fix fpu guard warning.
- Add Replay/PSR active check in link loss status check.
- Remove SYMCLK F and G values from link encoder and MANUAL_FLOW_CONTROL from optc.
- Add minimum vfp requirement.
- Fix narrowing boundaries and eDP parser assignment.
- Fix dml2_0 narrowing boundaries.
- Add README.md file to DML2_0 repository.
- Fix DPMS using partially updated pipe context.
- Move dml2_destroy to non-FPU compilation unit.
Cc: Daniel Wheeler <daniel.wheeler@amd.com>
Allen Li (2):
drm/amd/display: Unset Replay desync error verification by default
drm/amd/display: Add Replay/PSR active check in link loss status check
Andrew Lichmanov (1):
drm/amd/display: Remove SYMCLK F and G values from link encoder and
MANUAL_FLOW_CONTROL from optc
Dillon Varone (1):
drm/amd/display: Add minimum vfp requirement
Dominik Kaszewski (1):
drm/amd/display: Fix DPMS using partially updated pipe context
Gaghik Khachatrian (3):
drm/amd/display: Fix implicit narrowing conversion warnings
drm/amd/display: Fix narrowing boundaries and eDP parser assignment
drm/amd/display: Fix dml2_0 narrowing boundaries
Ilya Bakoulin (1):
drm/amd/display: Fix double free
Rafal Ostrowski (2):
drm/amd/display: Align HWSS fast commit path with legacy path
drm/amd/display: Move dml2_destroy to non-FPU compilation unit
Ray Wu (2):
drm/amd/display: Introduce power module on Linux
drm/amd/display: Add power module on Linux
Roman Li (3):
drm/amd/display: Add allow_clock_gating to dcn42 dccg
drm/amd/display: bypass post csc for additional color spaces in dcn42
drm/amd/display: Remove unused dml2_project
Samson Tam (1):
drm/amd/display: Add README.md file to DML2_0 repository
Taimur Hassan (1):
drm/amd/display: Promote DC to 3.2.379
Wayne Lin (1):
drm/amd/display: Fix fpu guard warning
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 344 +-
.../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h | 10 +
.../drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c | 36 +-
.../amd/display/amdgpu_dm/amdgpu_dm_crtc.c | 74 +-
.../amd/display/amdgpu_dm/amdgpu_dm_crtc.h | 5 +-
.../amd/display/amdgpu_dm/amdgpu_dm_debugfs.c | 60 +-
.../drm/amd/display/amdgpu_dm/amdgpu_dm_ism.c | 26 +-
.../drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c | 242 +-
.../drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h | 13 +-
.../amd/display/amdgpu_dm/amdgpu_dm_replay.c | 143 +-
.../amd/display/amdgpu_dm/amdgpu_dm_replay.h | 28 +-
.../display/amdgpu_dm/amdgpu_dm_services.c | 27 +
.../drm/amd/display/dc/basics/custom_float.c | 2 +-
.../gpu/drm/amd/display/dc/basics/dce_calcs.c | 2 +-
.../gpu/drm/amd/display/dc/bios/bios_parser.c | 6 +-
.../drm/amd/display/dc/bios/bios_parser2.c | 21 +-
.../drm/amd/display/dc/bios/command_table.c | 12 +-
.../drm/amd/display/dc/bios/command_table2.c | 4 +-
.../dc/clk_mgr/dce110/dce110_clk_mgr.c | 6 +-
.../amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c | 12 +-
.../display/dc/clk_mgr/dcn30/dcn30_clk_mgr.c | 39 +-
.../display/dc/clk_mgr/dcn301/vg_clk_mgr.c | 13 +-
.../display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c | 14 +-
.../dc/clk_mgr/dcn314/dcn314_clk_mgr.c | 14 +-
.../dc/clk_mgr/dcn315/dcn315_clk_mgr.c | 15 +-
.../dc/clk_mgr/dcn316/dcn316_clk_mgr.c | 15 +-
.../display/dc/clk_mgr/dcn32/dcn32_clk_mgr.c | 43 +-
.../display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c | 16 +-
.../dc/clk_mgr/dcn401/dcn401_clk_mgr.c | 40 +-
.../display/dc/clk_mgr/dcn42/dcn42_clk_mgr.c | 22 +-
.../amd/display/dc/clk_mgr/dcn42/dcn42_smu.c | 1 -
drivers/gpu/drm/amd/display/dc/core/dc.c | 206 +-
.../drm/amd/display/dc/core/dc_hw_sequencer.c | 565 ++-
.../gpu/drm/amd/display/dc/core/dc_resource.c | 66 +-
.../gpu/drm/amd/display/dc/core/dc_stream.c | 26 +-
.../gpu/drm/amd/display/dc/core/dc_surface.c | 2 +-
drivers/gpu/drm/amd/display/dc/dc.h | 17 +-
drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c | 137 +-
drivers/gpu/drm/amd/display/dc/dc_fused_io.c | 6 +-
drivers/gpu/drm/amd/display/dc/dc_helper.c | 12 +-
drivers/gpu/drm/amd/display/dc/dc_stream.h | 38 +-
drivers/gpu/drm/amd/display/dc/dc_types.h | 30 +
.../amd/display/dc/dccg/dcn31/dcn31_dccg.c | 4 +-
.../amd/display/dc/dccg/dcn401/dcn401_dccg.c | 20 +-
.../amd/display/dc/dccg/dcn42/dcn42_dccg.c | 2 +
drivers/gpu/drm/amd/display/dc/dce/dce_aux.c | 4 +-
.../drm/amd/display/dc/dce/dce_clock_source.c | 24 +-
.../gpu/drm/amd/display/dc/dce/dce_i2c_hw.c | 2 +-
.../gpu/drm/amd/display/dc/dce/dce_i2c_sw.c | 16 +-
.../drm/amd/display/dc/dce/dce_panel_cntl.c | 4 +-
.../drm/amd/display/dc/dce/dce_transform.c | 8 +-
.../gpu/drm/amd/display/dc/dce/dmub_abm_lcd.c | 14 +-
drivers/gpu/drm/amd/display/dc/dce/dmub_psr.c | 12 +-
.../gpu/drm/amd/display/dc/dce/dmub_replay.c | 23 +-
.../display/dc/dce80/dce80_timing_generator.c | 2 +-
.../amd/display/dc/dcn10/dcn10_cm_common.c | 4 +-
.../drm/amd/display/dc/dcn30/dcn30_mmhubbub.c | 16 +-
.../dc/dio/dcn401/dcn401_dio_stream_encoder.c | 2 +-
.../dc/dio/dcn42/dcn42_dio_link_encoder.h | 2 -
.../dc/dio/dcn42/dcn42_dio_stream_encoder.c | 4 +-
.../drm/amd/display/dc/dml/calcs/dcn_calcs.c | 5 +-
.../drm/amd/display/dc/dml/dcn20/dcn20_fpu.c | 9 +-
.../drm/amd/display/dc/dml/dcn20/dcn20_fpu.h | 2 +-
.../drm/amd/display/dc/dml/dcn30/dcn30_fpu.c | 2 +-
.../drm/amd/display/dc/dml/dcn31/dcn31_fpu.c | 6 +-
.../drm/amd/display/dc/dml/dcn31/dcn31_fpu.h | 6 +-
.../drm/amd/display/dc/dml/dcn32/dcn32_fpu.c | 37 +-
.../gpu/drm/amd/display/dc/dml2_0/README.md | 31 +
.../amd/display/dc/dml2_0/display_mode_core.c | 14 +-
.../amd/display/dc/dml2_0/display_mode_util.c | 20 +-
.../dml2_0/dml21/dml21_translation_helper.c | 15 +-
.../amd/display/dc/dml2_0/dml21/dml21_utils.c | 2 +-
.../display/dc/dml2_0/dml21/dml21_wrapper.c | 4 +-
.../dc/dml2_0/dml21/inc/dml_top_types.h | 1 -
.../dml21/src/dml2_core/dml2_core_factory.c | 1 -
.../dml21/src/dml2_dpmm/dml2_dpmm_factory.c | 1 -
.../dml21/src/dml2_mcg/dml2_mcg_factory.c | 1 -
.../dml21/src/dml2_pmo/dml2_pmo_factory.c | 3 +-
.../dml21/src/dml2_top/dml2_top_interfaces.c | 1 -
.../amd/display/dc/dml2_0/dml2_mall_phantom.c | 100 +-
.../drm/amd/display/dc/dml2_0/dml2_policy.c | 6 +-
.../dc/dml2_0/dml2_translation_helper.c | 4 +-
.../drm/amd/display/dc/dml2_0/dml2_utils.c | 40 +-
.../drm/amd/display/dc/dml2_0/dml2_wrapper.c | 11 +
.../amd/display/dc/dml2_0/dml2_wrapper_fpu.c | 10 -
.../drm/amd/display/dc/dpp/dcn42/dcn42_dpp.c | 6 +-
.../drm/amd/display/dc/dsc/dcn20/dcn20_dsc.c | 16 +-
.../gpu/drm/amd/display/dc/dsc/rc_calc_dpi.c | 33 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_ddc.c | 2 +-
.../gpu/drm/amd/display/dc/gpio/hw_generic.c | 2 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.c | 2 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_gpio.h | 9 +
drivers/gpu/drm/amd/display/dc/gpio/hw_hpd.c | 2 +-
.../display/dc/hubbub/dcn10/dcn10_hubbub.c | 16 +-
.../display/dc/hubbub/dcn20/dcn20_hubbub.c | 28 +-
.../display/dc/hubbub/dcn20/dcn20_hubbub.h | 3 +
.../display/dc/hubbub/dcn21/dcn21_hubbub.c | 12 +-
.../display/dc/hubbub/dcn30/dcn30_hubbub.c | 12 +-
.../display/dc/hubbub/dcn31/dcn31_hubbub.c | 12 +-
.../amd/display/dc/hubp/dcn20/dcn20_hubp.c | 4 +-
.../amd/display/dc/hubp/dcn21/dcn21_hubp.c | 4 +-
.../amd/display/dc/hubp/dcn30/dcn30_hubp.c | 4 +-
.../amd/display/dc/hwss/dce110/dce110_hwseq.c | 33 +-
.../amd/display/dc/hwss/dce120/dce120_hwseq.c | 12 +-
.../amd/display/dc/hwss/dcn10/dcn10_hwseq.c | 12 +-
.../amd/display/dc/hwss/dcn20/dcn20_hwseq.c | 13 +-
.../amd/display/dc/hwss/dcn21/dcn21_hwseq.c | 8 +-
.../amd/display/dc/hwss/dcn30/dcn30_hwseq.c | 16 +-
.../amd/display/dc/hwss/dcn314/dcn314_hwseq.c | 4 +-
.../amd/display/dc/hwss/dcn32/dcn32_hwseq.c | 4 +-
.../amd/display/dc/hwss/dcn35/dcn35_hwseq.c | 6 +-
.../amd/display/dc/hwss/dcn401/dcn401_hwseq.c | 18 +-
.../drm/amd/display/dc/hwss/hw_sequencer.h | 138 +
drivers/gpu/drm/amd/display/dc/inc/bw_fixed.h | 2 +-
.../dc/irq/dce110/irq_service_dce110.c | 2 +-
.../display/dc/link/accessories/link_dp_cts.c | 37 +-
.../display/dc/link/hwss/link_hwss_hpo_dp.c | 4 +-
.../drm/amd/display/dc/link/link_detection.c | 4 +-
.../gpu/drm/amd/display/dc/link/link_dpms.c | 16 +-
.../drm/amd/display/dc/link/link_factory.c | 6 +-
.../amd/display/dc/link/protocols/link_ddc.c | 5 +-
.../dc/link/protocols/link_dp_capability.c | 2 +-
.../display/dc/link/protocols/link_dp_dpia.c | 2 +-
.../dc/link/protocols/link_dp_dpia_bw.c | 10 +-
.../dc/link/protocols/link_dp_irq_handler.c | 57 +-
.../dc/link/protocols/link_dp_panel_replay.c | 27 +-
.../link/protocols/link_dp_training_8b_10b.c | 10 +-
.../link/protocols/link_edp_panel_control.c | 41 +-
.../dc/mmhubbub/dcn20/dcn20_mmhubbub.c | 4 +-
.../dc/mmhubbub/dcn32/dcn32_mmhubbub.c | 4 +-
.../amd/display/dc/optc/dcn20/dcn20_optc.c | 4 +-
.../amd/display/dc/optc/dcn42/dcn42_optc.h | 1 -
.../dc/resource/dce110/dce110_resource.c | 4 +-
.../dc/resource/dcn10/dcn10_resource.c | 4 +-
.../dc/resource/dcn20/dcn20_resource.c | 24 +-
.../dc/resource/dcn21/dcn21_resource.c | 9 +-
.../dc/resource/dcn30/dcn30_resource.c | 14 +-
.../dc/resource/dcn301/dcn301_resource.c | 8 +-
.../dc/resource/dcn302/dcn302_resource.c | 4 +-
.../dc/resource/dcn303/dcn303_resource.c | 4 +-
.../dc/resource/dcn31/dcn31_resource.c | 11 +-
.../dc/resource/dcn314/dcn314_resource.c | 4 +-
.../dc/resource/dcn315/dcn315_resource.c | 11 +-
.../dc/resource/dcn316/dcn316_resource.c | 11 +-
.../dc/resource/dcn32/dcn32_resource.c | 20 +-
.../resource/dcn32/dcn32_resource_helpers.c | 2 +-
.../dc/resource/dcn321/dcn321_resource.c | 4 +-
.../dc/resource/dcn35/dcn35_resource.c | 4 +-
.../dc/resource/dcn351/dcn351_resource.c | 4 +-
.../dc/resource/dcn36/dcn36_resource.c | 4 +-
.../dc/resource/dcn401/dcn401_resource.c | 4 +-
.../dc/resource/dcn42/dcn42_resource.c | 4 +-
.../dcn401/dcn401_soc_and_ip_translator.c | 42 +-
.../dcn42/dcn42_soc_and_ip_translator.c | 14 +-
.../drm/amd/display/modules/inc/mod_power.h | 415 +++
.../display/modules/info_packet/info_packet.c | 2 +-
.../drm/amd/display/modules/power/Makefile | 2 +-
.../gpu/drm/amd/display/modules/power/power.c | 3030 +++++++++++++++++
.../amd/display/modules/power/power_helpers.c | 16 +-
159 files changed, 5833 insertions(+), 1416 deletions(-) create mode 100644 drivers/gpu/drm/amd/display/dc/dml2_0/README.md
create mode 100644 drivers/gpu/drm/amd/display/modules/inc/mod_power.h
create mode 100644 drivers/gpu/drm/amd/display/modules/power/power.c
--
2.43.0