From: Ville Syrjala <ville.syrjala@linux.intel.com>
To: intel-gfx@lists.freedesktop.org
Cc: intel-xe@lists.freedesktop.org
Subject: [PATCH 01/14] drm/i915: Drop the cached per-pipe min_cdclk[] from bw state
Date: Fri, 7 Mar 2025 20:01:26 +0200
Message-ID: <20250307180139.15744-2-ville.syrjala@linux.intel.com>
In-Reply-To: <20250307180139.15744-1-ville.syrjala@linux.intel.com>
From: Ville Syrjälä <ville.syrjala@linux.intel.com>
intel_bw_crtc_min_cdclk() only depends on the pipe data rate,
which we already have stashed in bw_state->data_rate[]. So
stashing the resulting min_cdclk[] as well is redundant. Get
rid of it.
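For reference, the "Maximum Pipe Read Bandwidth" limit computed by
intel_bw_crtc_min_cdclk() reduces to ceil(data_rate * 10 / 512). A
standalone sketch of that arithmetic (plain C outside the kernel;
bw_crtc_min_cdclk() is a hypothetical name, and the DISPLAY_VER(i915)
< 12 early return is omitted here):

```c
#include <stdint.h>

/*
 * Sketch of the "Maximum Pipe Read Bandwidth" formula:
 * min cdclk = ceil(data_rate * 10 / 512), i.e. the open-coded
 * equivalent of DIV_ROUND_UP_ULL(mul_u32_u32(data_rate, 10), 512).
 */
static int bw_crtc_min_cdclk(unsigned int data_rate)
{
	/* widen before multiplying to avoid 32-bit overflow */
	uint64_t product = (uint64_t)data_rate * 10;

	return (int)((product + 511) / 512);
}
```

Since this depends only on data_rate, recomputing it on demand from
bw_state->data_rate[] is as cheap as reading a cached value.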
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
---
drivers/gpu/drm/i915/display/intel_bw.c | 17 +++++++----------
drivers/gpu/drm/i915/display/intel_bw.h | 1 -
2 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_bw.c b/drivers/gpu/drm/i915/display/intel_bw.c
index 048be2872247..7b9ae926c5c4 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.c
+++ b/drivers/gpu/drm/i915/display/intel_bw.c
@@ -795,15 +795,13 @@ static unsigned int intel_bw_crtc_data_rate(const struct intel_crtc_state *crtc_
}
/* "Maximum Pipe Read Bandwidth" */
-static int intel_bw_crtc_min_cdclk(const struct intel_crtc_state *crtc_state)
+static int intel_bw_crtc_min_cdclk(struct drm_i915_private *i915,
+ unsigned int data_rate)
{
- struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
- struct drm_i915_private *i915 = to_i915(crtc->base.dev);
-
if (DISPLAY_VER(i915) < 12)
return 0;
- return DIV_ROUND_UP_ULL(mul_u32_u32(intel_bw_crtc_data_rate(crtc_state), 10), 512);
+ return DIV_ROUND_UP_ULL(mul_u32_u32(data_rate, 10), 512);
}
static unsigned int intel_bw_num_active_planes(struct drm_i915_private *dev_priv,
@@ -1138,7 +1136,8 @@ static bool intel_bw_state_changed(struct drm_i915_private *i915,
return true;
}
- if (old_bw_state->min_cdclk[pipe] != new_bw_state->min_cdclk[pipe])
+ if (intel_bw_crtc_min_cdclk(i915, old_bw_state->data_rate[pipe]) !=
+ intel_bw_crtc_min_cdclk(i915, new_bw_state->data_rate[pipe]))
return true;
}
@@ -1238,7 +1237,8 @@ int intel_bw_min_cdclk(struct drm_i915_private *i915,
min_cdclk = intel_bw_dbuf_min_cdclk(i915, bw_state);
for_each_pipe(i915, pipe)
- min_cdclk = max(min_cdclk, bw_state->min_cdclk[pipe]);
+ min_cdclk = max(min_cdclk,
+ intel_bw_crtc_min_cdclk(i915, bw_state->data_rate[pipe]));
return min_cdclk;
}
@@ -1266,9 +1266,6 @@ int intel_bw_calc_min_cdclk(struct intel_atomic_state *state,
old_bw_state = intel_atomic_get_old_bw_state(state);
skl_crtc_calc_dbuf_bw(new_bw_state, crtc_state);
-
- new_bw_state->min_cdclk[crtc->pipe] =
- intel_bw_crtc_min_cdclk(crtc_state);
}
if (!old_bw_state)
diff --git a/drivers/gpu/drm/i915/display/intel_bw.h b/drivers/gpu/drm/i915/display/intel_bw.h
index 3313e4eac4f0..e977c3586dc3 100644
--- a/drivers/gpu/drm/i915/display/intel_bw.h
+++ b/drivers/gpu/drm/i915/display/intel_bw.h
@@ -55,7 +55,6 @@ struct intel_bw_state {
*/
bool force_check_qgv;
- int min_cdclk[I915_MAX_PIPES];
unsigned int data_rate[I915_MAX_PIPES];
u8 num_active_planes[I915_MAX_PIPES];
};
--
2.45.3