AMD-GFX Archive on lore.kernel.org
From: Harry Wentland <harry.wentland@amd.com>
To: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>,
	amd-gfx@lists.freedesktop.org
Cc: Alex Deucher <alexander.deucher@amd.com>,
	Stephen Rothwell <sfr@canb.auug.org.au>,
	Aurabindo Pillai <aurabindo.pillai@amd.com>,
	Hamza Mahfooz <hamza.mahfooz@amd.com>
Subject: Re: [PATCH 4/6] drm/amd/display: Reduce frame size in the bounding box for DCN31/316
Date: Mon, 6 Jun 2022 10:10:29 -0400	[thread overview]
Message-ID: <b6d6fb40-c4be-89ef-cc9e-28f7a21750e2@amd.com> (raw)
In-Reply-To: <20220603185042.3408844-5-Rodrigo.Siqueira@amd.com>

On 2022-06-03 14:50, Rodrigo Siqueira wrote:
> GCC throws warnings for the functions dcn31_update_bw_bounding_box and
> dcn316_update_bw_bounding_box due to their frame size, which look like
> this:
> 
>  error: the frame size of 1936 bytes is larger than 1024 bytes [-Werror=frame-larger-than=]
> 
> To fix this issue, drop an intermediate variable.
> 
> Cc: Stephen Rothwell <sfr@canb.auug.org.au>
> Cc: Hamza Mahfooz <hamza.mahfooz@amd.com>
> Cc: Aurabindo Pillai <aurabindo.pillai@amd.com>
> Signed-off-by: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>

Reviewed-by: Harry Wentland <harry.wentland@amd.com>

Harry

> ---
>  .../drm/amd/display/dc/dml/dcn31/dcn31_fpu.c  | 58 +++++++++----------
>  1 file changed, 26 insertions(+), 32 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
> index 54db2eca9e6b..ee898bc93fd5 100644
> --- a/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
> +++ b/drivers/gpu/drm/amd/display/dc/dml/dcn31/dcn31_fpu.c
> @@ -574,7 +574,6 @@ void dcn31_calculate_wm_and_dlg_fp(
>  void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
>  {
>  	struct clk_limit_table *clk_table = &bw_params->clk_table;
> -	struct _vcs_dpi_voltage_scaling_st clock_limits[DC__VOLTAGE_STATES];
>  	unsigned int i, closest_clk_lvl;
>  	int j;
>  
> @@ -607,29 +606,27 @@ void dcn31_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params
>  				}
>  			}
>  
> -			clock_limits[i].state = i;
> +			dcn3_1_soc.clock_limits[i].state = i;
>  
>  			/* Clocks dependent on voltage level. */
> -			clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;
> -			clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
> -			clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
> -			clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2 * clk_table->entries[i].wck_ratio;
> +			dcn3_1_soc.clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;
> +			dcn3_1_soc.clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
> +			dcn3_1_soc.clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
> +			dcn3_1_soc.clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2 * clk_table->entries[i].wck_ratio;
>  
>  			/* Clocks independent of voltage level. */
> -			clock_limits[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
> +			dcn3_1_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
>  				dcn3_1_soc.clock_limits[closest_clk_lvl].dispclk_mhz;
>  
> -			clock_limits[i].dppclk_mhz = max_dppclk_mhz ? max_dppclk_mhz :
> +			dcn3_1_soc.clock_limits[i].dppclk_mhz = max_dppclk_mhz ? max_dppclk_mhz :
>  				dcn3_1_soc.clock_limits[closest_clk_lvl].dppclk_mhz;
>  
> -			clock_limits[i].dram_bw_per_chan_gbps = dcn3_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;
> -			clock_limits[i].dscclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz;
> -			clock_limits[i].dtbclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;
> -			clock_limits[i].phyclk_d18_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;
> -			clock_limits[i].phyclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz;
> +			dcn3_1_soc.clock_limits[i].dram_bw_per_chan_gbps = dcn3_1_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;
> +			dcn3_1_soc.clock_limits[i].dscclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].dscclk_mhz;
> +			dcn3_1_soc.clock_limits[i].dtbclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;
> +			dcn3_1_soc.clock_limits[i].phyclk_d18_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;
> +			dcn3_1_soc.clock_limits[i].phyclk_mhz = dcn3_1_soc.clock_limits[closest_clk_lvl].phyclk_mhz;
>  		}
> -		for (i = 0; i < clk_table->num_entries; i++)
> -			dcn3_1_soc.clock_limits[i] = clock_limits[i];
>  		if (clk_table->num_entries) {
>  			dcn3_1_soc.num_states = clk_table->num_entries;
>  		}
> @@ -701,7 +698,6 @@ void dcn315_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param
>  void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_params)
>  {
>  	struct clk_limit_table *clk_table = &bw_params->clk_table;
> -	struct _vcs_dpi_voltage_scaling_st clock_limits[DC__VOLTAGE_STATES];
>  	unsigned int i, closest_clk_lvl;
>  	int max_dispclk_mhz = 0, max_dppclk_mhz = 0;
>  	int j;
> @@ -739,34 +735,32 @@ void dcn316_update_bw_bounding_box(struct dc *dc, struct clk_bw_params *bw_param
>  				closest_clk_lvl = dcn3_16_soc.num_states - 1;
>  			}
>  
> -			clock_limits[i].state = i;
> +			dcn3_16_soc.clock_limits[i].state = i;
>  
>  			/* Clocks dependent on voltage level. */
> -			clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;
> +			dcn3_16_soc.clock_limits[i].dcfclk_mhz = clk_table->entries[i].dcfclk_mhz;
>  			if (clk_table->num_entries == 1 &&
> -				clock_limits[i].dcfclk_mhz < dcn3_16_soc.clock_limits[closest_clk_lvl].dcfclk_mhz) {
> +				dcn3_16_soc.clock_limits[i].dcfclk_mhz < dcn3_16_soc.clock_limits[closest_clk_lvl].dcfclk_mhz) {
>  				/*SMU fix not released yet*/
> -				clock_limits[i].dcfclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dcfclk_mhz;
> +				dcn3_16_soc.clock_limits[i].dcfclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dcfclk_mhz;
>  			}
> -			clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
> -			clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
> -			clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2 * clk_table->entries[i].wck_ratio;
> +			dcn3_16_soc.clock_limits[i].fabricclk_mhz = clk_table->entries[i].fclk_mhz;
> +			dcn3_16_soc.clock_limits[i].socclk_mhz = clk_table->entries[i].socclk_mhz;
> +			dcn3_16_soc.clock_limits[i].dram_speed_mts = clk_table->entries[i].memclk_mhz * 2 * clk_table->entries[i].wck_ratio;
>  
>  			/* Clocks independent of voltage level. */
> -			clock_limits[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
> +			dcn3_16_soc.clock_limits[i].dispclk_mhz = max_dispclk_mhz ? max_dispclk_mhz :
>  				dcn3_16_soc.clock_limits[closest_clk_lvl].dispclk_mhz;
>  
> -			clock_limits[i].dppclk_mhz = max_dppclk_mhz ? max_dppclk_mhz :
> +			dcn3_16_soc.clock_limits[i].dppclk_mhz = max_dppclk_mhz ? max_dppclk_mhz :
>  				dcn3_16_soc.clock_limits[closest_clk_lvl].dppclk_mhz;
>  
> -			clock_limits[i].dram_bw_per_chan_gbps = dcn3_16_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;
> -			clock_limits[i].dscclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dscclk_mhz;
> -			clock_limits[i].dtbclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;
> -			clock_limits[i].phyclk_d18_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;
> -			clock_limits[i].phyclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].phyclk_mhz;
> +			dcn3_16_soc.clock_limits[i].dram_bw_per_chan_gbps = dcn3_16_soc.clock_limits[closest_clk_lvl].dram_bw_per_chan_gbps;
> +			dcn3_16_soc.clock_limits[i].dscclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dscclk_mhz;
> +			dcn3_16_soc.clock_limits[i].dtbclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].dtbclk_mhz;
> +			dcn3_16_soc.clock_limits[i].phyclk_d18_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].phyclk_d18_mhz;
> +			dcn3_16_soc.clock_limits[i].phyclk_mhz = dcn3_16_soc.clock_limits[closest_clk_lvl].phyclk_mhz;
>  		}
> -		for (i = 0; i < clk_table->num_entries; i++)
> -			dcn3_16_soc.clock_limits[i] = clock_limits[i];
>  		if (clk_table->num_entries) {
>  			dcn3_16_soc.num_states = clk_table->num_entries;
>  		}



Thread overview: 17+ messages
2022-06-03 18:50 [PATCH 0/6] Cleaning up some GCC warnings and other minor issues Rodrigo Siqueira
2022-06-03 18:50 ` [PATCH 1/6] drm/amd/display: Remove duplicated macro Rodrigo Siqueira
2022-06-06 14:04   ` Harry Wentland
2022-06-03 18:50 ` [PATCH 2/6] drm/amd/display: Reduce frame size in the bounding box for DCN20 Rodrigo Siqueira
2022-06-06 14:05   ` Harry Wentland
2022-06-03 18:50 ` [PATCH 3/6] drm/amd/display: Reduce frame size in the bounding box for DCN301 Rodrigo Siqueira
2022-06-06 14:08   ` Harry Wentland
2022-06-03 18:50 ` [PATCH 4/6] drm/amd/display: Reduce frame size in the bounding box for DCN31/316 Rodrigo Siqueira
2022-06-06 14:10   ` Harry Wentland [this message]
2022-06-03 18:50 ` [PATCH 5/6] drm/amd/display: Reduce frame size in the bounding box for DCN21 Rodrigo Siqueira
2022-06-06 14:11   ` Harry Wentland
2022-06-03 18:50 ` [PATCH 6/6] Revert "drm/amd/display: Drop unnecessary guard from DC resource" Rodrigo Siqueira
2022-06-06 14:16   ` Harry Wentland
2022-06-06 16:17     ` Alex Deucher
2022-06-06 18:01       ` Harry Wentland
2022-06-06 18:04         ` Alex Deucher
2022-06-03 19:40 ` [PATCH 0/6] Cleaning up some GCC warnings and other minor issues Alex Deucher
