From: "Lisovskiy, Stanislav" <stanislav.lisovskiy@intel.com>
To: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 07/11] drm/i915/audio : Consider fractional vdsc bpp while computing tu_data
Date: Mon, 5 Dec 2022 09:35:31 +0200 [thread overview]
Message-ID: <Y42fQ3LLtuILCzql@intel.com> (raw)
In-Reply-To: <20221128101922.217217-8-ankit.k.nautiyal@intel.com>
On Mon, Nov 28, 2022 at 03:49:18PM +0530, Ankit Nautiyal wrote:
> MTL+ supports fractional compressed bits_per_pixel, with precision of
> 1/16. This compressed bpp is stored in U6.4 format.
> Accommodate the precision during calculation of transfer unit data
> for hblank_early calculation.
>
> Signed-off-by: Ankit Nautiyal <ankit.k.nautiyal@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_audio.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_audio.c b/drivers/gpu/drm/i915/display/intel_audio.c
> index f63d5824aca2..4797040a6362 100644
> --- a/drivers/gpu/drm/i915/display/intel_audio.c
> +++ b/drivers/gpu/drm/i915/display/intel_audio.c
> @@ -510,14 +510,14 @@ static unsigned int calc_hblank_early_prog(struct intel_encoder *encoder,
> unsigned int link_clks_available, link_clks_required;
> unsigned int tu_data, tu_line, link_clks_active;
> unsigned int h_active, h_total, hblank_delta, pixel_clk;
> - unsigned int fec_coeff, cdclk, vdsc_bpp;
> + unsigned int fec_coeff, cdclk, vdsc_bppx16;
> unsigned int link_clk, lanes;
> unsigned int hblank_rise;
>
> h_active = crtc_state->hw.adjusted_mode.crtc_hdisplay;
> h_total = crtc_state->hw.adjusted_mode.crtc_htotal;
> pixel_clk = crtc_state->hw.adjusted_mode.crtc_clock;
> - vdsc_bpp = dsc_integral_compressed_bpp(crtc_state->dsc.compressed_bpp);
> + vdsc_bppx16 = crtc_state->dsc.compressed_bpp;
> cdclk = i915->display.cdclk.hw.cdclk;
> /* fec= 0.972261, using rounding multiplier of 1000000 */
> fec_coeff = 972261;
> @@ -525,10 +525,10 @@ static unsigned int calc_hblank_early_prog(struct intel_encoder *encoder,
> lanes = crtc_state->lane_count;
>
> drm_dbg_kms(&i915->drm, "h_active = %u link_clk = %u :"
> - "lanes = %u vdsc_bpp = %u cdclk = %u\n",
> - h_active, link_clk, lanes, vdsc_bpp, cdclk);
> + "lanes = %u vdsc_bppx16 = %u cdclk = %u\n",
> + h_active, link_clk, lanes, vdsc_bppx16, cdclk);
>
> - if (WARN_ON(!link_clk || !pixel_clk || !lanes || !vdsc_bpp || !cdclk))
> + if (WARN_ON(!link_clk || !pixel_clk || !lanes || !vdsc_bppx16 || !cdclk))
> return 0;
>
> link_clks_available = (h_total - h_active) * link_clk / pixel_clk - 28;
> @@ -540,7 +540,7 @@ static unsigned int calc_hblank_early_prog(struct intel_encoder *encoder,
> hblank_delta = DIV64_U64_ROUND_UP(mul_u32_u32(5 * (link_clk + cdclk), pixel_clk),
> mul_u32_u32(link_clk, cdclk));
>
> - tu_data = div64_u64(mul_u32_u32(pixel_clk * vdsc_bpp * 8, 1000000),
> + tu_data = div64_u64(mul_u32_u32(pixel_clk * vdsc_bppx16 * 8, 16 * 1000000),
I think it should be:
tu_data = div64_u64(mul_u32_u32(pixel_clk * vdsc_bppx16 * 8, 1000000),
mul_u32_u32(link_clk * lanes * 16, fec_coeff));
i.e. you need to divide by 16, not multiply, because vdsc_bppx16 already
stores vdsc_bpp multiplied by 16. This is visible in the logs: during
testing it was, for example, 384 for bpp 24, so there is no point in
multiplying it by 16 again.
Stan
> mul_u32_u32(link_clk * lanes, fec_coeff));
> tu_line = div64_u64(h_active * mul_u32_u32(link_clk, fec_coeff),
> mul_u32_u32(64 * pixel_clk, 1000000));
> --
> 2.25.1
>
Thread overview: 21+ messages
2022-11-28 10:19 [Intel-gfx] [PATCH 00/11] Add DSC fractional bpp support Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 01/11] drm/i915/dp: Check if force dsc bpc <= max requested bpc Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 02/11] drm/display/dp: Add helper function to get DSC bpp precision Ankit Nautiyal
2022-11-28 15:27 ` kernel test robot
2022-11-28 10:19 ` [Intel-gfx] [PATCH 03/11] drm/i915/dp: Rename helpers to get DSC max pipe bpp and max output bpp Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 04/11] drm/i915/dp: Get optimal link config to have best compressed bpp Ankit Nautiyal
2022-12-05 7:28 ` Lisovskiy, Stanislav
2022-12-06 10:15 ` Nautiyal, Ankit K
2022-11-28 10:19 ` [Intel-gfx] [PATCH 05/11] drm/i915/display: Store compressed bpp in U6.4 format Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 06/11] drm/i915/display: Consider fractional vdsc bpp while computing m_n values Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 07/11] drm/i915/audio : Consider fractional vdsc bpp while computing tu_data Ankit Nautiyal
2022-12-05 7:35 ` Lisovskiy, Stanislav [this message]
2022-12-06 10:19 ` Nautiyal, Ankit K
2022-11-28 10:19 ` [Intel-gfx] [PATCH 08/11] drm/i915/dsc/mtl: Add support for fractional bpp Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 09/11] drm/i915/dp: Iterate over output bpp with fractional step size Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 10/11] drm/i915/dsc: Add debugfs entry to validate DSC fractional bpp Ankit Nautiyal
2022-11-28 10:19 ` [Intel-gfx] [PATCH 11/11] drm/i915/dsc: Allow DSC only with fractional bpp when forced from debugfs Ankit Nautiyal
2022-11-28 12:45 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Add DSC fractional bpp support Patchwork
2022-11-28 12:45 ` [Intel-gfx] ✗ Fi.CI.DOCS: " Patchwork
2022-11-28 13:04 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-11-28 16:57 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork