From: Imre Deak <imre.deak@intel.com>
To: Jani Nikula <jani.nikula@intel.com>
Cc: intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: Re: [PATCH 04/14] drm/i915/dp: Pass .4 BPP values to {icl,xelpd}_dsc_compute_link_config()
Date: Fri, 31 Jan 2025 16:05:19 +0200 [thread overview]
Message-ID: <Z5zYn5XGX3wfyjia@ideak-desk.fi.intel.com> (raw)
In-Reply-To: <e72f153fd28755e41ee8c5a7b9e6de257c3b27ac.1738327620.git.jani.nikula@intel.com>
On Fri, Jan 31, 2025 at 02:49:57PM +0200, Jani Nikula wrote:
> Try to keep the variables in the same domain a bit longer to reduce
> juggling between integers and .4 fixed point. Change parameter order to
> min, max while at it.
>
> For now, keep the juggling in dsc_compute_compressed_bpp() to ensure
> min/max will always have a zero fractional part. To be fixed later.
>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_dp.c | 28 ++++++++++++++-----------
> 1 file changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 02d1a5453b46..b13d806c9de7 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2077,8 +2077,8 @@ static int
> icl_dsc_compute_link_config(struct intel_dp *intel_dp,
> struct intel_crtc_state *pipe_config,
> const struct link_config_limits *limits,
> - int dsc_max_bpp,
> - int dsc_min_bpp,
> + int min_bpp_x16,
> + int max_bpp_x16,
> int pipe_bpp,
> int timeslots)
> {
> @@ -2086,11 +2086,11 @@ icl_dsc_compute_link_config(struct intel_dp *intel_dp,
> int output_bpp = intel_dp_output_bpp(pipe_config->output_format, pipe_bpp);
>
> /* Compressed BPP should be less than the Input DSC bpp */
> - dsc_max_bpp = min(dsc_max_bpp, output_bpp - 1);
> + max_bpp_x16 = min(max_bpp_x16, fxp_q4_from_int(output_bpp - 1));
>
> for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
> - if (valid_dsc_bpp[i] < dsc_min_bpp ||
> - valid_dsc_bpp[i] > dsc_max_bpp)
> + if (valid_dsc_bpp[i] < fxp_q4_to_int(min_bpp_x16) ||
> + valid_dsc_bpp[i] > fxp_q4_to_int(max_bpp_x16))
> continue;
>
> ret = dsc_compute_link_config(intel_dp,
> @@ -2119,8 +2119,8 @@ xelpd_dsc_compute_link_config(struct intel_dp *intel_dp,
> const struct intel_connector *connector,
> struct intel_crtc_state *pipe_config,
> const struct link_config_limits *limits,
> - int dsc_max_bpp,
> - int dsc_min_bpp,
> + int min_bpp_x16,
> + int max_bpp_x16,
> int pipe_bpp,
> int timeslots)
> {
> @@ -2132,10 +2132,9 @@ xelpd_dsc_compute_link_config(struct intel_dp *intel_dp,
> bpp_step_x16 = intel_dp_dsc_bpp_step_x16(connector);
>
> /* Compressed BPP should be less than the Input DSC bpp */
> - dsc_max_bpp = min(dsc_max_bpp << 4, (output_bpp << 4) - bpp_step_x16);
> - dsc_min_bpp = dsc_min_bpp << 4;
> + max_bpp_x16 = min(max_bpp_x16, fxp_q4_from_int(output_bpp) - bpp_step_x16);
>
> - for (bpp_x16 = dsc_max_bpp; bpp_x16 >= dsc_min_bpp; bpp_x16 -= bpp_step_x16) {
> + for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
> if (intel_dp->force_dsc_fractional_bpp_en &&
> !fxp_q4_to_frac(bpp_x16))
> continue;
> @@ -2168,6 +2167,7 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> const struct drm_display_mode *adjusted_mode = &pipe_config->hw.adjusted_mode;
> int dsc_min_bpp;
> int dsc_max_bpp;
> + int min_bpp_x16, max_bpp_x16;
> int dsc_joiner_max_bpp;
> int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
>
> @@ -2178,11 +2178,15 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> num_joined_pipes);
> dsc_max_bpp = min(dsc_joiner_max_bpp, fxp_q4_to_int(limits->link.max_bpp_x16));
>
> + /* FIXME: remove the round trip via integers */
> + min_bpp_x16 = fxp_q4_from_int(dsc_min_bpp);
> + max_bpp_x16 = fxp_q4_from_int(dsc_max_bpp);
> +
> if (DISPLAY_VER(display) >= 13)
> return xelpd_dsc_compute_link_config(intel_dp, connector, pipe_config, limits,
> - dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots);
> + min_bpp_x16, max_bpp_x16, pipe_bpp, timeslots);
> return icl_dsc_compute_link_config(intel_dp, pipe_config, limits,
> - dsc_max_bpp, dsc_min_bpp, pipe_bpp, timeslots);
> + min_bpp_x16, max_bpp_x16, pipe_bpp, timeslots);
> }
>
> int intel_dp_dsc_min_src_input_bpc(void)
> --
> 2.39.5
>
Thread overview: 43+ messages
2025-01-31 12:49 [PATCH 00/14] drm/i915/dp: dsc fix, refactoring and cleanups Jani Nikula
2025-01-31 12:49 ` [PATCH 01/14] drm/i915/dp: Iterate DSC BPP from high to low on all platforms Jani Nikula
2025-01-31 13:32 ` Imre Deak
2025-02-03 14:46 ` Jani Nikula
2025-01-31 16:13 ` Nautiyal, Ankit K
2025-01-31 12:49 ` [PATCH 02/14] drm/i915/dp: Add intel_dp_dsc_bpp_step_x16() helper to get DSC BPP precision Jani Nikula
2025-01-31 13:45 ` Imre Deak
2025-01-31 14:06 ` Jani Nikula
2025-01-31 23:28 ` [PATCH v2] " Jani Nikula
2025-01-31 12:49 ` [PATCH 03/14] drm/i915/dp: Rename some variables in xelpd_dsc_compute_link_config() Jani Nikula
2025-01-31 13:57 ` Imre Deak
2025-01-31 12:49 ` [PATCH 04/14] drm/i915/dp: Pass .4 BPP values to {icl, xelpd}_dsc_compute_link_config() Jani Nikula
2025-01-31 14:05 ` Imre Deak [this message]
2025-01-31 12:49 ` [PATCH 05/14] drm/i915/dp: Move max DSC BPP reduction one level higher Jani Nikula
2025-01-31 14:26 ` Imre Deak
2025-01-31 12:49 ` [PATCH 06/14] drm/i915/dp: Change icl_dsc_compute_link_config() DSC BPP iteration Jani Nikula
2025-01-31 14:30 ` Imre Deak
2025-01-31 12:50 ` [PATCH 07/14] drm/i915/dp: Move force_dsc_fractional_bpp_en check to intel_dp_dsc_valid_bpp() Jani Nikula
2025-01-31 14:32 ` Imre Deak
2025-01-31 12:50 ` [PATCH 08/14] drm/i915/dp: Unify DSC link config functions Jani Nikula
2025-01-31 14:35 ` Imre Deak
2025-01-31 12:50 ` [PATCH 09/14] drm/i915/dp: Inline do_dsc_compute_compressed_bpp() Jani Nikula
2025-01-31 14:48 ` Imre Deak
2025-01-31 12:50 ` [PATCH 10/14] drm/i915/dp: Simplify input BPP checks in intel_dp_dsc_compute_pipe_bpp() Jani Nikula
2025-01-31 14:52 ` Imre Deak
2025-01-31 12:50 ` [PATCH 11/14] drm/i915/dp: Use int for compressed BPP in dsc_compute_link_config() Jani Nikula
2025-01-31 15:08 ` Imre Deak
2025-01-31 15:27 ` Imre Deak
2025-01-31 12:50 ` [PATCH 12/14] drm/i915/dp: Drop compute_pipe_bpp parameter from intel_dp_dsc_compute_config() Jani Nikula
2025-01-31 15:10 ` Imre Deak
2025-01-31 12:50 ` [PATCH 13/14] drm/i915/dp: Pass connector state all the way to dsc_compute_link_config() Jani Nikula
2025-01-31 15:38 ` Imre Deak
2025-01-31 12:50 ` [PATCH 14/14] drm/i915/mst: Convert intel_dp_mtp_tu_compute_config() to .4 format Jani Nikula
2025-01-31 15:46 ` Imre Deak
2025-01-31 12:57 ` ✓ CI.Patch_applied: success for drm/i915/dp: dsc fix, refactoring and cleanups Patchwork
2025-01-31 12:57 ` ✗ CI.checkpatch: warning " Patchwork
2025-01-31 12:58 ` ✓ CI.KUnit: success " Patchwork
2025-01-31 13:15 ` ✓ CI.Build: " Patchwork
2025-01-31 13:17 ` ✓ CI.Hooks: " Patchwork
2025-01-31 13:18 ` ✗ CI.checksparse: warning " Patchwork
2025-01-31 13:38 ` ✓ Xe.CI.BAT: success " Patchwork
2025-01-31 17:34 ` ✗ Xe.CI.Full: failure " Patchwork
2025-02-01 0:24 ` ✗ CI.Patch_applied: failure for drm/i915/dp: dsc fix, refactoring and cleanups (rev2) Patchwork