* [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs
@ 2026-02-28 4:53 Rosen Penev
2026-02-28 4:53 ` [PATCHv2 for 6.12 and 6.6 1/2] drm/amd/display: Add pixel_clock to amd_pp_display_configuration Rosen Penev
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Rosen Penev @ 2026-02-28 4:53 UTC (permalink / raw)
To: stable
Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher,
Christian König, Xinhui Pan, David Airlie, Simona Vetter,
Kenneth Feng, Timur Kristóf, Alex Hung, Greg Kroah-Hartman,
Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list
Because of incomplete backports to stable kernels, DC ended up breaking
on older GCN 1 GPUs. This patchset adds the missing upstream commits to
at least fix the panic/black screen on boot.
They are applicable to 6.12, 6.6, and 6.1 as those are the currently
supported kernels that 7009e3af0474aca5f64262b3c72fb6e23b232f9b got
backported to.
6.1 needs two extra backports for these two commits to be cherry-picked
cleanly. Those are
96ce96f8773da4814622fd97e5226915a2c30706
d09ef243035b75a6d403ebfeb7e87fa20d7e25c6
v2: Add Signed-off-by.
Timur Kristóf (2):
drm/amd/display: Add pixel_clock to amd_pp_display_configuration
drm/amd/pm: Use pm_display_cfg in legacy DPM (v2)
.../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c | 1 +
.../dc/clk_mgr/dce110/dce110_clk_mgr.c | 2 +-
.../drm/amd/display/dc/dm_services_types.h | 2 +-
drivers/gpu/drm/amd/include/dm_pp_interface.h | 1 +
drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c | 67 +++++++++++++++++++
.../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h | 2 +
drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c | 4 +-
.../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c | 6 +-
drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c | 65 ++++++------------
.../gpu/drm/amd/pm/powerplay/amd_powerplay.c | 11 +--
10 files changed, 101 insertions(+), 60 deletions(-)
--
2.53.0
^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCHv2 for 6.12 and 6.6 1/2] drm/amd/display: Add pixel_clock to amd_pp_display_configuration
  2026-02-28  4:53 [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
@ 2026-02-28  4:53 ` Rosen Penev
  2026-02-28  4:53 ` [PATCHv2 for 6.12 and 6.6 2/2] drm/amd/pm: Use pm_display_cfg in legacy DPM (v2) Rosen Penev
  2026-03-04  4:03 ` [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
  2 siblings, 0 replies; 11+ messages in thread
From: Rosen Penev @ 2026-02-28  4:53 UTC (permalink / raw)
  To: stable
  Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher,
	Christian König, Xinhui Pan, David Airlie, Simona Vetter,
	Kenneth Feng, Timur Kristóf, Alex Hung, Greg Kroah-Hartman,
	Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

From: Timur Kristóf <timur.kristof@gmail.com>

[ Upstream commit b515dcb0dc4e85d8254f5459cfb32fce88dacbfb ]

This commit adds the pixel_clock field to the display config struct
so that power management (DPM) can use it.

We currently don't have a proper bandwidth calculation on old GPUs
with DCE 6-10 because dce_calcs only supports DCE 11+. So the power
management (DPM) on these GPUs may need to make ad-hoc decisions
for display based on the pixel clock.

Also rename sym_clock to pixel_clock in dm_pp_single_disp_config to
avoid confusion with other code where the sym_clock refers to the
DisplayPort symbol clock.

Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Rosen Penev <rosenp@gmail.com>
---
 drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c       | 1 +
 drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c | 2 +-
 drivers/gpu/drm/amd/display/dc/dm_services_types.h             | 2 +-
 drivers/gpu/drm/amd/include/dm_pp_interface.h                  | 1 +
 4 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
index 848c5b4bb301..016230896d0e 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c
@@ -97,6 +97,7 @@ bool dm_pp_apply_display_requirements(
 			const struct dm_pp_single_disp_config *dc_cfg =
 						&pp_display_cfg->disp_configs[i];
 			adev->pm.pm_display_cfg.displays[i].controller_id = dc_cfg->pipe_idx + 1;
+			adev->pm.pm_display_cfg.displays[i].pixel_clock = dc_cfg->pixel_clock;
 		}

 		amdgpu_dpm_display_configuration_change(adev, &adev->pm.pm_display_cfg);
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
index 13cf415e38e5..d50b9440210e 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dce110/dce110_clk_mgr.c
@@ -164,7 +164,7 @@ void dce110_fill_display_configs(
 			stream->link->cur_link_settings.link_rate;
 		cfg->link_settings.link_spread =
 			stream->link->cur_link_settings.link_spread;
-		cfg->sym_clock = stream->phy_pix_clk;
+		cfg->pixel_clock = stream->phy_pix_clk;
 		/* Round v_refresh*/
 		cfg->v_refresh = stream->timing.pix_clk_100hz * 100;
 		cfg->v_refresh /= stream->timing.h_total;
diff --git a/drivers/gpu/drm/amd/display/dc/dm_services_types.h b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
index facf269c4326..b4eefe3ce7c7 100644
--- a/drivers/gpu/drm/amd/display/dc/dm_services_types.h
+++ b/drivers/gpu/drm/amd/display/dc/dm_services_types.h
@@ -127,7 +127,7 @@ struct dm_pp_single_disp_config {
 	uint32_t src_height;
 	uint32_t src_width;
 	uint32_t v_refresh;
-	uint32_t sym_clock; /* HDMI only */
+	uint32_t pixel_clock; /* Pixel clock in KHz (for HDMI only: normalized) */
 	struct dc_link_settings link_settings; /* DP only */
 };

diff --git a/drivers/gpu/drm/amd/include/dm_pp_interface.h b/drivers/gpu/drm/amd/include/dm_pp_interface.h
index acd1cef61b7c..349544504c93 100644
--- a/drivers/gpu/drm/amd/include/dm_pp_interface.h
+++ b/drivers/gpu/drm/amd/include/dm_pp_interface.h
@@ -65,6 +65,7 @@ struct single_display_configuration {
 	uint32_t view_resolution_cy;
 	enum amd_pp_display_config_type displayconfigtype;
 	uint32_t vertical_refresh; /* for active display */
+	uint32_t pixel_clock; /* Pixel clock in KHz (for HDMI only: normalized) */
 };

 #define MAX_NUM_DISPLAY 32
--
2.53.0
^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCHv2 for 6.12 and 6.6 2/2] drm/amd/pm: Use pm_display_cfg in legacy DPM (v2)
  2026-02-28  4:53 [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
  2026-02-28  4:53 ` [PATCHv2 for 6.12 and 6.6 1/2] drm/amd/display: Add pixel_clock to amd_pp_display_configuration Rosen Penev
@ 2026-02-28  4:53 ` Rosen Penev
  2026-03-04  4:03 ` [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
  2 siblings, 0 replies; 11+ messages in thread
From: Rosen Penev @ 2026-02-28  4:53 UTC (permalink / raw)
  To: stable
  Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher,
	Christian König, Xinhui Pan, David Airlie, Simona Vetter,
	Kenneth Feng, Timur Kristóf, Alex Hung, Greg Kroah-Hartman,
	Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

From: Timur Kristóf <timur.kristof@gmail.com>

[ Upstream commit 9d73b107a61b73e7101d4b728ddac3d2c77db111 ]

This commit is necessary for DC to function well with chips that use
the legacy power management code, ie. SI and KV.

Communicate display information from DC to the legacy PM code.
Currently DC uses pm_display_cfg to communicate power management
requirements from the display code to the DPM code. However, the
legacy (non-DC) code path used different fields and therefore could
not take into account anything from DC.

Change the legacy display code to fill the same pm_display_cfg struct
as DC and use the same in the legacy DPM code. To ease review and
reduce churn, this commit does not yet delete the now unneeded code,
that is done in the next commit.

v2: Rebase. Fix single_display in amdgpu_dpm_pick_power_state.

Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Rosen Penev <rosenp@gmail.com>
---
 drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  | 67 +++++++++++++++++++
 .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |  2 +
 drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c    |  4 +-
 .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    |  6 +-
 drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c    | 65 ++++++------------
 .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  | 11 +--
 6 files changed, 97 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c b/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
index 2d2d2d5e6763..9ef965e4a92e 100644
--- a/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
+++ b/drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c
@@ -100,3 +100,70 @@ u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev)

 	return vrefresh;
 }
+
+void amdgpu_dpm_get_display_cfg(struct amdgpu_device *adev)
+{
+	struct drm_device *ddev = adev_to_drm(adev);
+	struct amd_pp_display_configuration *cfg = &adev->pm.pm_display_cfg;
+	struct single_display_configuration *display_cfg;
+	struct drm_crtc *crtc;
+	struct amdgpu_crtc *amdgpu_crtc;
+	struct amdgpu_connector *conn;
+	int num_crtcs = 0;
+	int vrefresh;
+	u32 vblank_in_pixels, vblank_time_us;
+
+	cfg->min_vblank_time = 0xffffffff; /* if the displays are off, vblank time is max */
+
+	if (adev->mode_info.num_crtc && adev->mode_info.mode_config_initialized) {
+		list_for_each_entry(crtc, &ddev->mode_config.crtc_list, head) {
+			amdgpu_crtc = to_amdgpu_crtc(crtc);
+
+			/* The array should only contain active displays. */
+			if (!amdgpu_crtc->enabled)
+				continue;
+
+			conn = to_amdgpu_connector(amdgpu_crtc->connector);
+			display_cfg = &adev->pm.pm_display_cfg.displays[num_crtcs++];
+
+			if (amdgpu_crtc->hw_mode.clock) {
+				vrefresh = drm_mode_vrefresh(&amdgpu_crtc->hw_mode);
+
+				vblank_in_pixels =
+					amdgpu_crtc->hw_mode.crtc_htotal *
+					(amdgpu_crtc->hw_mode.crtc_vblank_end -
+					 amdgpu_crtc->hw_mode.crtc_vdisplay +
+					 (amdgpu_crtc->v_border * 2));
+
+				vblank_time_us =
+					vblank_in_pixels * 1000 / amdgpu_crtc->hw_mode.clock;
+
+				/* The legacy (non-DC) code has issues with mclk switching
+				 * with refresh rates over 120 Hz. Disable mclk switching.
+				 */
+				if (vrefresh > 120)
+					vblank_time_us = 0;
+
+				/* Find minimum vblank time. */
+				if (vblank_time_us < cfg->min_vblank_time)
+					cfg->min_vblank_time = vblank_time_us;
+
+				/* Find vertical refresh rate of first active display. */
+				if (!cfg->vrefresh)
+					cfg->vrefresh = vrefresh;
+			}
+
+			if (amdgpu_crtc->crtc_id < cfg->crtc_index) {
+				/* Find first active CRTC and its line time. */
+				cfg->crtc_index = amdgpu_crtc->crtc_id;
+				cfg->line_time_in_us = amdgpu_crtc->line_time;
+			}
+
+			display_cfg->controller_id = amdgpu_crtc->crtc_id;
+			display_cfg->pixel_clock = conn->pixelclock_for_modeset;
+		}
+	}
+
+	cfg->display_clk = adev->clock.default_dispclk;
+	cfg->num_display = num_crtcs;
+}
diff --git a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
index 5c2a89f0d5d5..8be11510cd92 100644
--- a/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
+++ b/drivers/gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h
@@ -29,4 +29,6 @@ u32 amdgpu_dpm_get_vblank_time(struct amdgpu_device *adev);

 u32 amdgpu_dpm_get_vrefresh(struct amdgpu_device *adev);

+void amdgpu_dpm_get_display_cfg(struct amdgpu_device *adev);
+
 #endif
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
index 6b34a33d788f..8cf7e517da84 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
@@ -2299,7 +2299,7 @@ static void kv_apply_state_adjust_rules(struct amdgpu_device *adev,
 	if (pi->sys_info.nb_dpm_enable) {
 		force_high = (mclk >= pi->sys_info.nbp_memory_clock[3]) ||
-			pi->video_start || (adev->pm.dpm.new_active_crtc_count >= 3) ||
+			pi->video_start || (adev->pm.pm_display_cfg.num_display >= 3) ||
 			pi->disable_nb_ps3_in_battery;
 		ps->dpm0_pg_nb_ps_lo = force_high ? 0x2 : 0x3;
 		ps->dpm0_pg_nb_ps_hi = 0x2;
@@ -2358,7 +2358,7 @@ static int kv_calculate_nbps_level_settings(struct amdgpu_device *adev)
 		return 0;

 	force_high = ((mclk >= pi->sys_info.nbp_memory_clock[3]) ||
-		(adev->pm.dpm.new_active_crtc_count >= 3) || pi->video_start);
+		(adev->pm.pm_display_cfg.num_display >= 3) || pi->video_start);

 	if (force_high) {
 		for (i = pi->lowest_valid; i <= pi->highest_valid; i++)
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
index c7518b13e787..8eb121db2ce4 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
@@ -797,8 +797,7 @@ static struct amdgpu_ps *amdgpu_dpm_pick_power_state(struct amdgpu_device *adev,
 	int i;
 	struct amdgpu_ps *ps;
 	u32 ui_class;
-	bool single_display = (adev->pm.dpm.new_active_crtc_count < 2) ?
-		true : false;
+	bool single_display = adev->pm.pm_display_cfg.num_display < 2;

 	/* check if the vblank period is too short to adjust the mclk */
 	if (single_display && adev->powerplay.pp_funcs->vblank_too_short) {
@@ -994,7 +993,8 @@ void amdgpu_legacy_dpm_compute_clocks(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;

-	amdgpu_dpm_get_active_displays(adev);
+	if (!adev->dc_enabled)
+		amdgpu_dpm_get_display_cfg(adev);

 	amdgpu_dpm_change_power_state_locked(adev);
 }
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
index 29cecfab0704..7ea310601ff5 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
@@ -3058,7 +3058,7 @@ static int si_get_vce_clock_voltage(struct amdgpu_device *adev,
 static bool si_dpm_vblank_too_short(void *handle)
 {
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	u32 vblank_time = amdgpu_dpm_get_vblank_time(adev);
+	u32 vblank_time = adev->pm.pm_display_cfg.min_vblank_time;
 	/* we never hit the non-gddr5 limit so disable it */
 	u32 switch_limit = adev->gmc.vram_type == AMDGPU_VRAM_TYPE_GDDR5 ? 450 : 0;

@@ -3424,9 +3424,10 @@ static void rv770_get_engine_memory_ss(struct amdgpu_device *adev)
 static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
 					struct amdgpu_ps *rps)
 {
+	const struct amd_pp_display_configuration *display_cfg =
+		&adev->pm.pm_display_cfg;
 	struct si_ps *ps = si_get_ps(rps);
 	struct amdgpu_clock_and_voltage_limits *max_limits;
-	struct amdgpu_connector *conn;
 	bool disable_mclk_switching = false;
 	bool disable_sclk_switching = false;
 	u32 mclk, sclk;
@@ -3470,14 +3471,9 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
 	 * For example, 4K 60Hz and 1080p 144Hz fall into this category.
 	 * Find number of such displays connected.
 	 */
-	for (i = 0; i < adev->mode_info.num_crtc; i++) {
-		if (!(adev->pm.dpm.new_active_crtcs & (1 << i)) ||
-		    !adev->mode_info.crtcs[i]->enabled)
-			continue;
-
-		conn = to_amdgpu_connector(adev->mode_info.crtcs[i]->connector);
-
-		if (conn->pixelclock_for_modeset > 297000)
+	for (i = 0; i < display_cfg->num_display; i++) {
+		/* The array only contains active displays. */
+		if (display_cfg->displays[i].pixel_clock > 297000)
 			high_pixelclock_count++;
 	}
@@ -3510,7 +3506,7 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
 		rps->ecclk = 0;
 	}

-	if ((adev->pm.dpm.new_active_crtc_count > 1) ||
+	if ((adev->pm.pm_display_cfg.num_display > 1) ||
 	    si_dpm_vblank_too_short(adev))
 		disable_mclk_switching = true;
@@ -3658,7 +3654,7 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
 							ps->performance_levels[i].mclk,
 							max_limits->vddc, &ps->performance_levels[i].vddc);
 		btc_apply_voltage_dependency_rules(&adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk,
-						   adev->clock.current_dispclk,
+						   display_cfg->display_clk,
 						   max_limits->vddc, &ps->performance_levels[i].vddc);
 	}
@@ -4183,16 +4179,16 @@ static void si_program_ds_registers(struct amdgpu_device *adev)

 static void si_program_display_gap(struct amdgpu_device *adev)
 {
+	const struct amd_pp_display_configuration *cfg = &adev->pm.pm_display_cfg;
 	u32 tmp, pipe;
-	int i;

 	tmp = RREG32(CG_DISPLAY_GAP_CNTL) & ~(DISP1_GAP_MASK | DISP2_GAP_MASK);
-	if (adev->pm.dpm.new_active_crtc_count > 0)
+	if (cfg->num_display > 0)
 		tmp |= DISP1_GAP(R600_PM_DISPLAY_GAP_VBLANK_OR_WM);
 	else
 		tmp |= DISP1_GAP(R600_PM_DISPLAY_GAP_IGNORE);

-	if (adev->pm.dpm.new_active_crtc_count > 1)
+	if (cfg->num_display > 1)
 		tmp |= DISP2_GAP(R600_PM_DISPLAY_GAP_VBLANK_OR_WM);
 	else
 		tmp |= DISP2_GAP(R600_PM_DISPLAY_GAP_IGNORE);
@@ -4202,17 +4198,8 @@ static void si_program_display_gap(struct amdgpu_device *adev)
 	tmp = RREG32(DCCG_DISP_SLOW_SELECT_REG);
 	pipe = (tmp & DCCG_DISP1_SLOW_SELECT_MASK) >> DCCG_DISP1_SLOW_SELECT_SHIFT;

-	if ((adev->pm.dpm.new_active_crtc_count > 0) &&
-	    (!(adev->pm.dpm.new_active_crtcs & (1 << pipe)))) {
-		/* find the first active crtc */
-		for (i = 0; i < adev->mode_info.num_crtc; i++) {
-			if (adev->pm.dpm.new_active_crtcs & (1 << i))
-				break;
-		}
-		if (i == adev->mode_info.num_crtc)
-			pipe = 0;
-		else
-			pipe = i;
+	if (cfg->num_display > 0 && pipe != cfg->crtc_index) {
+		pipe = cfg->crtc_index;

 		tmp &= ~DCCG_DISP1_SLOW_SELECT_MASK;
 		tmp |= DCCG_DISP1_SLOW_SELECT(pipe);
@@ -4223,7 +4210,7 @@ static void si_program_display_gap(struct amdgpu_device *adev)
 	 * This can be a problem on PowerXpress systems or if you want to use the card
 	 * for offscreen rendering or compute if there are no crtcs enabled.
 	 */
-	si_notify_smc_display_change(adev, adev->pm.dpm.new_active_crtc_count > 0);
+	si_notify_smc_display_change(adev, cfg->num_display > 0);
 }

 static void si_enable_spread_spectrum(struct amdgpu_device *adev, bool enable)
@@ -5527,7 +5514,7 @@ static int si_convert_power_level_to_smc(struct amdgpu_device *adev,
 	    (pl->mclk <= pi->mclk_stutter_mode_threshold) &&
 	    !eg_pi->uvd_enabled &&
 	    (RREG32(DPG_PIPE_STUTTER_CONTROL) & STUTTER_ENABLE) &&
-	    (adev->pm.dpm.new_active_crtc_count <= 2)) {
+	    (adev->pm.pm_display_cfg.num_display <= 2)) {
 		level->mcFlags |= SISLANDS_SMC_MC_STUTTER_EN;
 	}
@@ -5676,7 +5663,7 @@ static bool si_is_state_ulv_compatible(struct amdgpu_device *adev,
 	/* XXX validate against display requirements! */

 	for (i = 0; i < adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.count; i++) {
-		if (adev->clock.current_dispclk <=
+		if (adev->pm.pm_display_cfg.display_clk <=
 		    adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[i].clk) {
 			if (ulv->pl.vddc <
 			    adev->pm.dpm.dyn_state.vddc_dependency_on_dispclk.entries[i].v)
@@ -5830,30 +5817,22 @@ static int si_upload_ulv_state(struct amdgpu_device *adev)

 static int si_upload_smc_data(struct amdgpu_device *adev)
 {
-	struct amdgpu_crtc *amdgpu_crtc = NULL;
-	int i;
+	const struct amd_pp_display_configuration *cfg = &adev->pm.pm_display_cfg;
 	u32 crtc_index = 0;
 	u32 mclk_change_block_cp_min = 0;
 	u32 mclk_change_block_cp_max = 0;

-	for (i = 0; i < adev->mode_info.num_crtc; i++) {
-		if (adev->pm.dpm.new_active_crtcs & (1 << i)) {
-			amdgpu_crtc = adev->mode_info.crtcs[i];
-			break;
-		}
-	}
-
 	/* When a display is plugged in, program these so that the SMC
 	 * performs MCLK switching when it doesn't cause flickering.
 	 * When no display is plugged in, there is no need to restrict
 	 * MCLK switching, so program them to zero.
 	 */
-	if (adev->pm.dpm.new_active_crtc_count && amdgpu_crtc) {
-		crtc_index = amdgpu_crtc->crtc_id;
+	if (cfg->num_display) {
+		crtc_index = cfg->crtc_index;

-		if (amdgpu_crtc->line_time) {
-			mclk_change_block_cp_min = 200 / amdgpu_crtc->line_time;
-			mclk_change_block_cp_max = 100 / amdgpu_crtc->line_time;
+		if (cfg->line_time_in_us) {
+			mclk_change_block_cp_min = 200 / cfg->line_time_in_us;
+			mclk_change_block_cp_max = 100 / cfg->line_time_in_us;
 		}
 	}
diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
index 0115d26b5af9..24b25cddf0c1 100644
--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
@@ -1569,16 +1569,7 @@ static void pp_pm_compute_clocks(void *handle)
 	struct amdgpu_device *adev = hwmgr->adev;

 	if (!adev->dc_enabled) {
-		amdgpu_dpm_get_active_displays(adev);
-		adev->pm.pm_display_cfg.num_display = adev->pm.dpm.new_active_crtc_count;
-		adev->pm.pm_display_cfg.vrefresh = amdgpu_dpm_get_vrefresh(adev);
-		adev->pm.pm_display_cfg.min_vblank_time = amdgpu_dpm_get_vblank_time(adev);
-		/* we have issues with mclk switching with
-		 * refresh rates over 120 hz on the non-DC code.
-		 */
-		if (adev->pm.pm_display_cfg.vrefresh > 120)
-			adev->pm.pm_display_cfg.min_vblank_time = 0;
-
+		amdgpu_dpm_get_display_cfg(adev);
 		pp_display_configuration_change(handle,
 						&adev->pm.pm_display_cfg);
 	}
--
2.53.0
^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-02-28  4:53 [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
  2026-02-28  4:53 ` [PATCHv2 for 6.12 and 6.6 1/2] drm/amd/display: Add pixel_clock to amd_pp_display_configuration Rosen Penev
  2026-02-28  4:53 ` [PATCHv2 for 6.12 and 6.6 2/2] drm/amd/pm: Use pm_display_cfg in legacy DPM (v2) Rosen Penev
@ 2026-03-04  4:03 ` Rosen Penev
  2026-03-04  8:10   ` Christian König
  2 siblings, 1 reply; 11+ messages in thread
From: Rosen Penev @ 2026-03-04  4:03 UTC (permalink / raw)
  To: stable
  Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher,
	Christian König, Xinhui Pan, David Airlie, Simona Vetter,
	Kenneth Feng, Timur Kristóf, Alex Hung, Greg Kroah-Hartman,
	Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

On Fri, Feb 27, 2026 at 8:54 PM Rosen Penev <rosenp@gmail.com> wrote:
>
> Because of incomplete backports to stable kernels, DC ended up breaking
> on older GCN 1 GPUs. This patchset adds the missing upstream commits to
> at least fix the panic/black screen on boot.
>
> They are applicable to 6.12, 6.6, and 6.1 as those are the currently
> supported kernels that 7009e3af0474aca5f64262b3c72fb6e23b232f9b got
> backported to.
>
> 6.1 needs two extra backports for these two commits to be cherry-picked
> cleanly. Those are
>
> 96ce96f8773da4814622fd97e5226915a2c30706
> d09ef243035b75a6d403ebfeb7e87fa20d7e25c6
>
> v2: Add Signed-off-by.
Do I need to resend?
>
> Timur Kristóf (2):
>   drm/amd/display: Add pixel_clock to amd_pp_display_configuration
>   drm/amd/pm: Use pm_display_cfg in legacy DPM (v2)
>
>  .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  1 +
>  .../dc/clk_mgr/dce110/dce110_clk_mgr.c        |  2 +-
>  .../drm/amd/display/dc/dm_services_types.h    |  2 +-
>  drivers/gpu/drm/amd/include/dm_pp_interface.h |  1 +
>  drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  | 67 +++++++++++++++++++
>  .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |  2 +
>  drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c    |  4 +-
>  .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    |  6 +-
>  drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c    | 65 ++++++------------
>  .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  | 11 +--
>  10 files changed, 101 insertions(+), 60 deletions(-)
>
> --
> 2.53.0
>
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04  4:03 ` [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
@ 2026-03-04  8:10   ` Christian König
  2026-03-04  9:09     ` Timur Kristóf
  0 siblings, 1 reply; 11+ messages in thread
From: Christian König @ 2026-03-04  8:10 UTC (permalink / raw)
  To: Rosen Penev
  Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher, Xinhui Pan,
	David Airlie, Simona Vetter, Kenneth Feng, Timur Kristóf, Alex Hung,
	Greg Kroah-Hartman, Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list, Greg KH

-stable +Greg

On 3/4/26 05:03, Rosen Penev wrote:
> On Fri, Feb 27, 2026 at 8:54 PM Rosen Penev <rosenp@gmail.com> wrote:
>>
>> Because of incomplete backports to stable kernels, DC ended up breaking
>> on older GCN 1 GPUs. This patchset adds the missing upstream commits to
>> at least fix the panic/black screen on boot.
>>
>> They are applicable to 6.12, 6.6, and 6.1 as those are the currently
>> supported kernels that 7009e3af0474aca5f64262b3c72fb6e23b232f9b got
>> backported to.
>>
>> 6.1 needs two extra backports for these two commits to be cherry-picked
>> cleanly. Those are
>>
>> 96ce96f8773da4814622fd97e5226915a2c30706
>> d09ef243035b75a6d403ebfeb7e87fa20d7e25c6
>>
>> v2: Add Signed-off-by.
> Do I need to resend?

Well first of all please stop sending those patches at all.

When you want something backported then add the CC: stable tag to the
original patch.

If you find that some patch is already upstream which isn't correctly
tagged then ping the relevant maintainers if that patch can be
backported.

But don't send stuff to the stable list all by yourself.

Regards,
Christian.

>>
>> Timur Kristóf (2):
>>   drm/amd/display: Add pixel_clock to amd_pp_display_configuration
>>   drm/amd/pm: Use pm_display_cfg in legacy DPM (v2)
>>
>>  .../amd/display/amdgpu_dm/amdgpu_dm_pp_smu.c  |  1 +
>>  .../dc/clk_mgr/dce110/dce110_clk_mgr.c        |  2 +-
>>  .../drm/amd/display/dc/dm_services_types.h    |  2 +-
>>  drivers/gpu/drm/amd/include/dm_pp_interface.h |  1 +
>>  drivers/gpu/drm/amd/pm/amdgpu_dpm_internal.c  | 67 +++++++++++++++++++
>>  .../gpu/drm/amd/pm/inc/amdgpu_dpm_internal.h  |  2 +
>>  drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c    |  4 +-
>>  .../gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c    |  6 +-
>>  drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c    | 65 ++++++------------
>>  .../gpu/drm/amd/pm/powerplay/amd_powerplay.c  | 11 +--
>>  10 files changed, 101 insertions(+), 60 deletions(-)
>>
>> --
>> 2.53.0
>>
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04  8:10   ` Christian König
@ 2026-03-04  9:09     ` Timur Kristóf
  2026-03-04 10:06       ` Christian König
  0 siblings, 1 reply; 11+ messages in thread
From: Timur Kristóf @ 2026-03-04  9:09 UTC (permalink / raw)
  To: Rosen Penev, Christian König
  Cc: Harry Wentland, Leo Li, Rodrigo Siqueira, Alex Deucher, Xinhui Pan,
	David Airlie, Simona Vetter, Kenneth Feng, Alex Hung,
	Greg Kroah-Hartman, Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list, Greg KH

On Wednesday, March 4, 2026 9:10:02 AM Central European Standard Time
Christian König wrote:
> -stable +Greg
>
> On 3/4/26 05:03, Rosen Penev wrote:
> > On Fri, Feb 27, 2026 at 8:54 PM Rosen Penev <rosenp@gmail.com> wrote:
> >> Because of incomplete backports to stable kernels, DC ended up breaking
> >> on older GCN 1 GPUs. This patchset adds the missing upstream commits to
> >> at least fix the panic/black screen on boot.
> >>
> >> They are applicable to 6.12, 6.6, and 6.1 as those are the currently
> >> supported kernels that 7009e3af0474aca5f64262b3c72fb6e23b232f9b got
> >> backported to.
> >>
> >> 6.1 needs two extra backports for these two commits to be cherry-picked
> >> cleanly. Those are
> >>
> >> 96ce96f8773da4814622fd97e5226915a2c30706
> >> d09ef243035b75a6d403ebfeb7e87fa20d7e25c6
> >>
> >> v2: Add Signed-off-by.
> >
> > Do I need to resend?
>
> Well first of all please stop sending those patches at all.
>
> When you want something backported then add the CC: stable tag to the
> original patch.
>
> If you find that some patch is already upstream which isn't correctly tagged
> then ping the relevant maintainers if that patch can be backported.
>
> But don't send stuff to the stable list all by yourself.
>
> Regards,
> Christian.

Hi Everyone,

The patches actually come from a branch of mine:
https://gitlab.freedesktop.org/Venemo/linux/-/commits/v6.12.74_si_dc_fixes

For context:

The crash comes from a patch that I wrote for 6.18 that fixes some issues on
the default, non-DC code path, that was backported to stable kernels. DC was
not the default code path before Linux 6.19, so I didn't mark the patches that
also fix DC for backporting, because I had assumed nobody uses the DC code path
on these kernel versions.

After a user reported to me that this causes issues for him with DC on 6.17
and older kernels, I sent a backported series to Greg and Sasha, in an email
thread with the subject line "Fixing an amdgpu crash caused by a backported
patch". The fixes were backported to 6.17 then.

I assumed that the stable maintainers would backport the fixes to all older
kernels that were also affected, but Rosen brought it to my attention that it
didn't happen. So I made the backports in the above branch. Rosen then decided
to send them to the mailing list.

Hope that helps clear up the situation.

Thanks & best regards,
Timur
^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCHv2 for 6.12 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04  9:09     ` Timur Kristóf
@ 2026-03-04 10:06       ` Christian König
  2026-03-04 12:44         ` Timur Kristóf
  0 siblings, 1 reply; 11+ messages in thread
From: Christian König @ 2026-03-04 10:06 UTC (permalink / raw)
  To: Timur Kristóf, Rosen Penev
  Cc: Harry Wentland, Leo Li, Alex Deucher, David Airlie, Simona Vetter,
	Kenneth Feng, Alex Hung, Greg Kroah-Hartman, Lijo Lazar, chr[],
	Sasha Levin, Wentao Liang, open list:AMD DISPLAY CORE,
	open list:DRM DRIVERS, open list

On 3/4/26 10:09, Timur Kristóf wrote:
> Hi Everyone,
>
> The patches actually come from a branch of mine:
> https://gitlab.freedesktop.org/Venemo/linux/-/commits/v6.12.74_si_dc_fixes
>
> For context:
>
> The crash comes from a patch that I wrote for 6.18 that fixes some issues on
> the default, non-DC code path, that was backported to stable kernels. DC was
> not the default code path before Linux 6.19, so I didn't mark the patches that
> also fix DC for backporting, because I had assumed nobody uses the DC code path
> on these kernel versions.
>
> After a user reported to me that this causes issues for him with DC on 6.17
> and older kernels, I sent a backported series to Greg and Sasha, in an email
> thread with the subject line "Fixing an amdgpu crash caused by a backported
> patch". The fixes were backported to 6.17 then.
>
> I assumed that the stable maintainers would backport the fixes to all older
> kernels that were also affected, but Rosen brought it to my attention that it
> didn't happen. So I made the backports in the above branch. Rosen then decided
> to send them to the mailing list.
>
> Hope that helps clear up the situation.

Yeah that indeed helped me to understand the situation, thanks.

In theory Harry and Leo should take care of stuff like this, but pretty much
everybody is overworked.

In that case guys feel free to go ahead and ping the stable maintainers that
something is missing.

Just make sure that when a patch passes through your hands that you add a
Signed-off-by tag.

Regards,
Christian.

> Thanks & best regards,
> Timur
* Re: [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04 10:06 ` Christian König
@ 2026-03-04 12:44   ` Timur Kristóf
  2026-03-04 22:23     ` Rosen Penev
  0 siblings, 1 reply; 11+ messages in thread
From: Timur Kristóf @ 2026-03-04 12:44 UTC (permalink / raw)
  To: Rosen Penev, Christian König
  Cc: Harry Wentland, Leo Li, Alex Deucher, David Airlie, Simona Vetter,
	Kenneth Feng, Alex Hung, Greg Kroah-Hartman, Lijo Lazar, chr[],
	Sasha Levin, Wentao Liang, open list:AMD DISPLAY CORE,
	open list:DRM DRIVERS, open list

On Wednesday, March 4, 2026 11:06:53 AM Central European Standard Time
Christian König wrote:
> >
> > Hi Everyone,
> >
> > The patches actually come from a branch of mine:
> > https://gitlab.freedesktop.org/Venemo/linux/-/commits/v6.12.74_si_dc_fixes
> >
> > For context:
> >
> > The crash comes from a patch that I wrote for 6.18 that fixes some issues
> > on the default, non-DC code path, that was backported to stable kernels.
> > DC was not the default code path before Linux 6.19, so I didn't mark the
> > patches that also fix DC for backporting, because I had assumed nobody
> > uses the DC code path on these kernel versions.
> >
> > After a user reported to me that this causes issues for him with DC on
> > 6.17 and older kernels, I sent a backported series to Greg and Sasha, in
> > an email thread with the subject line "Fixing an amdgpu crash caused by a
> > backported patch". The fixes were backported to 6.17 then.
> >
> > I assumed that the stable maintainers would backport the fixes to all
> > older kernels that were also affected, but Rosen brought it to my
> > attention that it didn't happen. So I made the backports in the above
> > branch. Rosen then decided to send them to the mailing list.
> >
> > Hope that helps clear up the situation.

Hi Christian,

> In theory Harry and Leo should take care of stuff like this

I don't blame them for this. It is my fault for breaking it in the first place,
and I didn't think there was any interest in using DC on older kernels.

> pretty much everybody is overworked.

Yeah. We all are.

>
> In that case guys feel free to go ahead and ping the stable maintainers that
> something is missing.
>
> Just make sure that when a patch passes through your hands that you add a
> Signed-off-by tag.

Thanks! Probably I should have sent the patches myself, then they already
would have had all the necessary tags. Sorry for the confusion.

Now that the situation is cleared up, is there anything else we need to do for
these two patches here?

Best regards,
Timur

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04 12:44 ` Timur Kristóf
@ 2026-03-04 22:23   ` Rosen Penev
  2026-03-08 13:15     ` Timur Kristóf
  0 siblings, 1 reply; 11+ messages in thread
From: Rosen Penev @ 2026-03-04 22:23 UTC (permalink / raw)
  To: Timur Kristóf
  Cc: Christian König, Harry Wentland, Leo Li, Alex Deucher,
	David Airlie, Simona Vetter, Kenneth Feng, Alex Hung,
	Greg Kroah-Hartman, Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

On Wed, Mar 4, 2026 at 4:44 AM Timur Kristóf <timur.kristof@gmail.com> wrote:
>
> On Wednesday, March 4, 2026 11:06:53 AM Central European Standard Time
> Christian König wrote:
> > >
> > > Hi Everyone,
> > >
> > > The patches actually come from a branch of mine:
> > > https://gitlab.freedesktop.org/Venemo/linux/-/commits/v6.12.74_si_dc_fixes
> > >
> > > For context:
> > >
> > > The crash comes from a patch that I wrote for 6.18 that fixes some issues
> > > on the default, non-DC code path, that was backported to stable kernels.
> > > DC was not the default code path before Linux 6.19, so I didn't mark the
> > > patches that also fix DC for backporting, because I had assumed nobody
> > > uses the DC code path on these kernel versions.

The DC code path just works better. So what if suspend is broken. I
would much rather have a working system. Hyprsunset for example doesn't
work without DC. No idea why.

Speaking of suspend, the fixes for it are fairly trivial to backport
to 6.12 as well.

> > > After a user reported to me that this causes issues for him with DC on
> > > 6.17 and older kernels,

That was me.

> > > I sent a backported series to Greg and Sasha, in an
> > > email thread with the subject line "Fixing an amdgpu crash caused by a
> > > backported patch". The fixes were backported to 6.17 then.
> > >
> > > I assumed that the stable maintainers would backport the fixes to all
> > > older kernels that were also affected, but Rosen brought it to my
> > > attention that it didn't happen. So I made the backports in the above
> > > branch. Rosen then decided to send them to the mailing list.
> > >
> > > Hope that helps clear up the situation.
>
> Hi Christian,
>
> > In theory Harry and Leo should take care of stuff like this
>
> I don't blame them for this. It is my fault for breaking it in the first
> place, and I didn't think there was any interest in using DC on older
> kernels.
>
> > pretty much everybody is overworked.
>
> Yeah. We all are.
>
> >
> > In that case guys feel free to go ahead and ping the stable maintainers
> > that something is missing.
> >
> > Just make sure that when a patch passes through your hands that you add a
> > Signed-off-by tag.
>
> Thanks! Probably I should have sent the patches myself, then they already
> would have had all the necessary tags. Sorry for the confusion.
>
> Now that the situation is cleared up, is there anything else we need to do
> for these two patches here?

Speaking of which, it's probably best to take over here. It's not fun
dealing with stable. I also didn't author these patches.

>
> Best regards,
> Timur

^ permalink raw reply	[flat|nested] 11+ messages in thread
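[Editorial aside] Hand backports like the ones discussed in this thread are conventionally applied with `git cherry-pick -x`, which appends a "(cherry picked from commit ...)" line recording the upstream commit ID — the provenance stable reviewers rely on. The sketch below demonstrates this in a throwaway repository; the branch and commit names are invented, whereas a real backport would cherry-pick the upstream SHAs onto a stable branch such as linux-6.1.y.

```shell
# Demonstrate that `git cherry-pick -x` records provenance pointing at
# the original commit. Everything happens in a temporary repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt
git commit -qm 'base commit'

# An "upstream" fix on its own branch (hypothetical subject line).
git checkout -qb upstream-fix
echo fix >> file.txt
git commit -qam 'drm/amd/example: hypothetical fix'
fix_sha=$(git rev-parse HEAD)

# Backport it to a "stable" branch; -x appends the provenance line.
git checkout -q main
git checkout -qb stable-branch
git cherry-pick -x "$fix_sha" > /dev/null
provenance=$(git log -1 --format=%b)
echo "$provenance"
```

The cover letter in this thread notes that 6.1 needs the two prerequisite commits (96ce96f8773d and d09ef243035b) applied first for the series to cherry-pick cleanly; the same `-x` invocation would record their upstream provenance in the same way.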
* Re: [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-04 22:23 ` Rosen Penev
@ 2026-03-08 13:15   ` Timur Kristóf
  2026-03-19 11:53     ` Greg Kroah-Hartman
  0 siblings, 1 reply; 11+ messages in thread
From: Timur Kristóf @ 2026-03-08 13:15 UTC (permalink / raw)
  To: Rosen Penev
  Cc: Christian König, Harry Wentland, Leo Li, Alex Deucher,
	David Airlie, Simona Vetter, Kenneth Feng, Alex Hung,
	Greg Kroah-Hartman, Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

On Wednesday, March 4, 2026 11:23:33 PM Central European Standard Time Rosen
Penev wrote:
> The DC code path just works better. So what if suspend is broken. I
> would much rather have a working system. Hyprsunset for example doesn't
> work without DC. No idea why.

It's great that it works better for you. Unfortunately that isn't the case for
everyone. It wasn't feature complete until 6.19, so it wasn't a feasible
default until then on these GPUs. Since 6.19 I would say it's pretty good now.

> Speaking of suspend, the fixes for it are fairly trivial to backport
> to 6.12 as well.

Yes. I don't understand why those patches weren't backported.
Like I said, I sent them for backporting to 6.17 many months ago and expected
to see them backported to older kernels as well.

> > Now that the situation is cleared up, is there anything else we need to do
> > for these two patches here?
>
> Speaking of which, it's probably best to take over here. It's not fun
> dealing with stable. I also didn't author these patches.

The question was meant for Christian and Greg.
What do I need to do to get these patches backported?

Thanks & best regards,
Timur

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs
  2026-03-08 13:15 ` Timur Kristóf
@ 2026-03-19 11:53   ` Greg Kroah-Hartman
  0 siblings, 0 replies; 11+ messages in thread
From: Greg Kroah-Hartman @ 2026-03-19 11:53 UTC (permalink / raw)
  To: Timur Kristóf
  Cc: Rosen Penev, Christian König, Harry Wentland, Leo Li,
	Alex Deucher, David Airlie, Simona Vetter, Kenneth Feng,
	Alex Hung, Lijo Lazar, chr[], Sasha Levin, Wentao Liang,
	open list:AMD DISPLAY CORE, open list:DRM DRIVERS, open list

On Sun, Mar 08, 2026 at 02:15:54PM +0100, Timur Kristóf wrote:
> On Wednesday, March 4, 2026 11:23:33 PM Central European Standard Time Rosen
> Penev wrote:
> > The DC code path just works better. So what if suspend is broken. I
> > would much rather have a working system. Hyprsunset for example doesn't
> > work without DC. No idea why.
>
> It's great that it works better for you. Unfortunately that isn't the case
> for everyone. It wasn't feature complete until 6.19, so it wasn't a feasible
> default until then on these GPUs. Since 6.19 I would say it's pretty good now.
>
> > Speaking of suspend, the fixes for it are fairly trivial to backport
> > to 6.12 as well.
>
> Yes. I don't understand why those patches weren't backported.
> Like I said, I sent them for backporting to 6.17 many months ago and expected
> to see them backported to older kernels as well.
>
> > > Now that the situation is cleared up, is there anything else we need to
> > > do for these two patches here?
> >
> > Speaking of which, it's probably best to take over here. It's not fun
> > dealing with stable. I also didn't author these patches.
>
> The question was meant for Christian and Greg.
> What do I need to do to get these patches backported?

I've taken them now, thanks.

greg k-h

^ permalink raw reply	[flat|nested] 11+ messages in thread
end of thread, other threads:[~2026-03-19 11:53 UTC | newest]

Thread overview: 11+ messages -- links below jump to the message on this page --
2026-02-28  4:53 [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
2026-02-28  4:53 ` [PATCHv2 for 6.112 and 6.6 1/2] drm/amd/display: Add pixel_clock to amd_pp_display_configuration Rosen Penev
2026-02-28  4:53 ` [PATCHv2 for 6.112 and 6.6 2/2] drm/amd/pm: Use pm_display_cfg in legacy DPM (v2) Rosen Penev
2026-03-04  4:03 ` [PATCHv2 for 6.112 and 6.6 0/2] amdgpu: fix panic on old GPUs Rosen Penev
2026-03-04  8:10   ` Christian König
2026-03-04  9:09     ` Timur Kristóf
2026-03-04 10:06       ` Christian König
2026-03-04 12:44         ` Timur Kristóf
2026-03-04 22:23           ` Rosen Penev
2026-03-08 13:15             ` Timur Kristóf
2026-03-19 11:53               ` Greg Kroah-Hartman