From: Imre Deak <imre.deak@intel.com>
To: Jani Nikula <jani.nikula@intel.com>
Cc: intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Subject: Re: [PATCH 09/14] drm/i915/dp: Inline do_dsc_compute_compressed_bpp()
Date: Fri, 31 Jan 2025 16:48:53 +0200
Message-ID: <Z5zi1VMgCFFBltNs@ideak-desk.fi.intel.com>
In-Reply-To: <91ae42cbdffe4938a665667955c577f887b92b9d.1738327620.git.jani.nikula@intel.com>
On Fri, Jan 31, 2025 at 02:50:02PM +0200, Jani Nikula wrote:
> With just the one platform-independent loop left in
> do_dsc_compute_compressed_bpp(), we don't really need the extra
> function, which is becoming increasingly hard to even name sensibly.
> Just merge the whole thing into dsc_compute_compressed_bpp(). Good
> riddance to the short-lived do_dsc_compute_compressed_bpp().
>
> Signed-off-by: Jani Nikula <jani.nikula@intel.com>
It also makes sense to keep functions short, but here the number of
parameters that had to be passed to do_dsc_compute_compressed_bpp()
argues against keeping it as a separate function:
Reviewed-by: Imre Deak <imre.deak@intel.com>
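
As an aside for readers less familiar with the .4 notation used
throughout: bpp_x16 values carry bits per pixel in fixed point with 4
fractional bits, so one step is 1/16 bpp. A minimal sketch of that
convention, assuming the usual Q4 semantics of the fxp_q4_*() helpers
(illustrative helper names, not the verbatim kernel definitions):

#define Q4_SHIFT 4	/* 4 fractional bits: one step == 1/16 bpp */

static inline int q4_from_int(int val)
{
	return val << Q4_SHIFT;			/* 24 bpp -> 384 */
}

static inline int q4_to_int_roundup(int val_q4)
{
	return (val_q4 + (1 << Q4_SHIFT) - 1) >> Q4_SHIFT;
}

static inline int q4_to_frac(int val_q4)
{
	return val_q4 & ((1 << Q4_SHIFT) - 1);	/* non-zero => fractional bpp */
}

/* Example: 12.5 bpp is stored as 12 * 16 + 8 == 200. */

Under that reading, the fxp_q4_to_frac(bpp_x16) check in the patch
below is simply a "has a fractional part" test.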
> ---
> drivers/gpu/drm/i915/display/intel_dp.c | 60 ++++++++++---------------
> 1 file changed, 23 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
> index 11a1ac28e21e..185c9f7e8538 100644
> --- a/drivers/gpu/drm/i915/display/intel_dp.c
> +++ b/drivers/gpu/drm/i915/display/intel_dp.c
> @@ -2097,41 +2097,6 @@ static bool intel_dp_dsc_valid_bpp(struct intel_dp *intel_dp, int bpp_x16)
> * Find the max compressed BPP we can find a link configuration for. The BPPs to
> * try depend on the source (platform) and sink.
> */
> -static int
> -do_dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> - struct intel_crtc_state *pipe_config,
> - const struct link_config_limits *limits,
> - int min_bpp_x16,
> - int max_bpp_x16,
> - int bpp_step_x16,
> - int timeslots)
> -{
> - struct intel_display *display = to_intel_display(intel_dp);
> - int bpp_x16;
> - int ret;
> -
> - for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
> - if (!intel_dp_dsc_valid_bpp(intel_dp, bpp_x16))
> - continue;
> -
> - ret = dsc_compute_link_config(intel_dp,
> - pipe_config,
> - limits,
> - bpp_x16,
> - timeslots);
> - if (ret == 0) {
> - pipe_config->dsc.compressed_bpp_x16 = bpp_x16;
> - if (intel_dp->force_dsc_fractional_bpp_en &&
> - fxp_q4_to_frac(bpp_x16))
> - drm_dbg_kms(display->drm,
> - "Forcing DSC fractional bpp\n");
> -
> - return 0;
> - }
> - }
> - return -EINVAL;
> -}
> -
> static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> const struct intel_connector *connector,
> struct intel_crtc_state *pipe_config,
> @@ -2147,6 +2112,8 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> int min_bpp_x16, max_bpp_x16, bpp_step_x16;
> int dsc_joiner_max_bpp;
> int num_joined_pipes = intel_crtc_num_joined_pipes(pipe_config);
> + int bpp_x16;
> + int ret;
>
> dsc_min_bpp = fxp_q4_to_int_roundup(limits->link.min_bpp_x16);
>
> @@ -2165,8 +2132,27 @@ static int dsc_compute_compressed_bpp(struct intel_dp *intel_dp,
> output_bpp = intel_dp_output_bpp(pipe_config->output_format, pipe_bpp);
> max_bpp_x16 = min(max_bpp_x16, fxp_q4_from_int(output_bpp) - bpp_step_x16);
>
> - return do_dsc_compute_compressed_bpp(intel_dp, pipe_config, limits,
> - min_bpp_x16, max_bpp_x16, bpp_step_x16, timeslots);
> + for (bpp_x16 = max_bpp_x16; bpp_x16 >= min_bpp_x16; bpp_x16 -= bpp_step_x16) {
> + if (!intel_dp_dsc_valid_bpp(intel_dp, bpp_x16))
> + continue;
> +
> + ret = dsc_compute_link_config(intel_dp,
> + pipe_config,
> + limits,
> + bpp_x16,
> + timeslots);
> + if (ret == 0) {
> + pipe_config->dsc.compressed_bpp_x16 = bpp_x16;
> + if (intel_dp->force_dsc_fractional_bpp_en &&
> + fxp_q4_to_frac(bpp_x16))
> + drm_dbg_kms(display->drm,
> + "Forcing DSC fractional bpp\n");
> +
> + return 0;
> + }
> + }
> +
> + return -EINVAL;
> }
>
> int intel_dp_dsc_min_src_input_bpc(void)
> --
> 2.39.5
>
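One more note on the max_bpp_x16 clamp in the patch: subtracting
bpp_step_x16 from fxp_q4_from_int(output_bpp) keeps the starting
compressed bpp strictly below the uncompressed output bpp, so the loop
can never pick a "compressed" rate that doesn't actually compress. A
standalone sketch of the arithmetic, assuming bpp_step_x16 == 1 (1/16
bpp precision); hypothetical example code, not part of i915:

#include <stdio.h>

int main(void)
{
	int output_bpp = 24;			/* uncompressed bpp */
	int bpp_step_x16 = 1;			/* 1/16 bpp step */
	int max_bpp_x16 = (output_bpp << 4) - bpp_step_x16;

	/* 383 -> 23.9375 bpp: strictly below the uncompressed 24 bpp,
	 * so DSC is guaranteed to reduce the link bandwidth. */
	printf("%d.%04d bpp\n", max_bpp_x16 >> 4, (max_bpp_x16 & 15) * 625);
	return 0;
}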