From: "Govindapillai, Vinod" <vinod.govindapillai@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"intel-gfx@lists.freedesktop.org"
<intel-gfx@lists.freedesktop.org>,
"Taylor, Clinton A" <clinton.a.taylor@intel.com>
Subject: Re: [PATCH v5 01/11] drm/i915/xe3lpd: Update pmdemand programming
Date: Mon, 28 Oct 2024 15:19:01 +0000 [thread overview]
Message-ID: <2690c5529e6bafeb0e35d757754622e519485a85.camel@intel.com> (raw)
In-Reply-To: <20241025204743.211510-2-clinton.a.taylor@intel.com>
On Fri, 2024-10-25 at 13:47 -0700, Clint Taylor wrote:
> From: Matt Roper <matthew.d.roper@intel.com>
>
> There are some minor changes to pmdemand handling on Xe3:
> - Active scalers are no longer tracked. We can simply skip the readout
> and programming of this field.
> - Active dbuf slices are no longer tracked. We should skip the readout
> and programming of this field and also make sure that it stays 0 in
> our software bookkeeping so that we won't erroneously return true
> from intel_pmdemand_needs_update() due to mismatches.
> - Even though there aren't enough pipes to utilize them, the size of
> the 'active pipes' field has expanded to four bits, taking over the
> register bits previously used for dbuf slices. Since the lower bits
> of the mask have moved, we need to update our reads/writes to handle
> this properly.
>
> v2: active pipes is no longer always max 3, add in the ability to go to
> 4 for PTL.
> v3: use intel_display for display_ver check, use INTEL_NUM_PIPES
> v4: add a conditional for number of pipes macro vs using 3.
> v5: reverse conditional order of v4.
> v6: undo v5 and fix num_pipes assignment
> v7: pass display struct instead of i915, checkpatch fix
>
> Bspec: 68883, 69125
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
> Signed-off-by: Matt Atwood <matthew.s.atwood@intel.com>
> Signed-off-by: Clint Taylor <Clinton.A.Taylor@intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_pmdemand.c | 73 +++++++++++++------
> drivers/gpu/drm/i915/display/intel_pmdemand.h | 4 +-
> drivers/gpu/drm/i915/i915_reg.h | 1 +
> 3 files changed, 53 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_pmdemand.c
> b/drivers/gpu/drm/i915/display/intel_pmdemand.c
> index ceaf9e3147da..32443ae8e76c 100644
> --- a/drivers/gpu/drm/i915/display/intel_pmdemand.c
> +++ b/drivers/gpu/drm/i915/display/intel_pmdemand.c
> @@ -258,6 +258,7 @@ intel_pmdemand_connector_needs_update(struct intel_atomic_state *state)
>
> static bool intel_pmdemand_needs_update(struct intel_atomic_state *state)
> {
> + struct intel_display *display = to_intel_display(state);
> const struct intel_bw_state *new_bw_state, *old_bw_state;
> const struct intel_cdclk_state *new_cdclk_state, *old_cdclk_state;
> const struct intel_crtc_state *new_crtc_state, *old_crtc_state;
> @@ -274,12 +275,16 @@ static bool intel_pmdemand_needs_update(struct intel_atomic_state *state)
> new_dbuf_state = intel_atomic_get_new_dbuf_state(state);
> old_dbuf_state = intel_atomic_get_old_dbuf_state(state);
> if (new_dbuf_state &&
> - (new_dbuf_state->active_pipes !=
> - old_dbuf_state->active_pipes ||
> - new_dbuf_state->enabled_slices !=
> - old_dbuf_state->enabled_slices))
> + new_dbuf_state->active_pipes != old_dbuf_state->active_pipes)
> return true;
>
> + if (DISPLAY_VER(display) < 30) {
> + if (new_dbuf_state &&
> + new_dbuf_state->enabled_slices !=
> + old_dbuf_state->enabled_slices)
> + return true;
> + }
> +
> new_cdclk_state = intel_atomic_get_new_cdclk_state(state);
> old_cdclk_state = intel_atomic_get_old_cdclk_state(state);
> if (new_cdclk_state &&
> @@ -327,10 +332,15 @@ int intel_pmdemand_atomic_check(struct intel_atomic_state *state)
> if (IS_ERR(new_dbuf_state))
> return PTR_ERR(new_dbuf_state);
>
> - new_pmdemand_state->params.active_pipes =
> - min_t(u8, hweight8(new_dbuf_state->active_pipes), 3);
> - new_pmdemand_state->params.active_dbufs =
> - min_t(u8, hweight8(new_dbuf_state->enabled_slices), 3);
> + if (DISPLAY_VER(i915) < 30) {
> + new_pmdemand_state->params.active_dbufs =
> + min_t(u8, hweight8(new_dbuf_state->enabled_slices), 3);
> + new_pmdemand_state->params.active_pipes =
> + min_t(u8, hweight8(new_dbuf_state->active_pipes), 3);
> + } else {
> + new_pmdemand_state->params.active_pipes =
> + min_t(u8, hweight8(new_dbuf_state->active_pipes), INTEL_NUM_PIPES(i915));
> + }
>
> new_cdclk_state = intel_atomic_get_cdclk_state(state);
> if (IS_ERR(new_cdclk_state))
> @@ -395,27 +405,32 @@ intel_pmdemand_init_pmdemand_params(struct drm_i915_private *i915,
>
> reg2 = intel_de_read(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1));
>
> - /* Set 1*/
> pmdemand_state->params.qclk_gv_bw =
> REG_FIELD_GET(XELPDP_PMDEMAND_QCLK_GV_BW_MASK, reg1);
> pmdemand_state->params.voltage_index =
> REG_FIELD_GET(XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK, reg1);
> pmdemand_state->params.qclk_gv_index =
> REG_FIELD_GET(XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK, reg1);
> - pmdemand_state->params.active_pipes =
> - REG_FIELD_GET(XELPDP_PMDEMAND_PIPES_MASK, reg1);
> - pmdemand_state->params.active_dbufs =
> - REG_FIELD_GET(XELPDP_PMDEMAND_DBUFS_MASK, reg1);
> pmdemand_state->params.active_phys =
> REG_FIELD_GET(XELPDP_PMDEMAND_PHYS_MASK, reg1);
>
> - /* Set 2*/
> pmdemand_state->params.cdclk_freq_mhz =
> REG_FIELD_GET(XELPDP_PMDEMAND_CDCLK_FREQ_MASK, reg2);
> pmdemand_state->params.ddiclk_max =
> REG_FIELD_GET(XELPDP_PMDEMAND_DDICLK_FREQ_MASK, reg2);
> - pmdemand_state->params.scalers =
> - REG_FIELD_GET(XELPDP_PMDEMAND_SCALERS_MASK, reg2);
> +
> + if (DISPLAY_VER(i915) >= 30) {
> + pmdemand_state->params.active_pipes =
> + REG_FIELD_GET(XE3_PMDEMAND_PIPES_MASK, reg1);
> + } else {
> + pmdemand_state->params.active_pipes =
> + REG_FIELD_GET(XELPDP_PMDEMAND_PIPES_MASK, reg1);
> + pmdemand_state->params.active_dbufs =
> + REG_FIELD_GET(XELPDP_PMDEMAND_DBUFS_MASK, reg1);
> +
> + pmdemand_state->params.scalers =
> + REG_FIELD_GET(XELPDP_PMDEMAND_SCALERS_MASK, reg2);
> + }
>
> unlock:
> mutex_unlock(&i915->display.pmdemand.lock);
> @@ -442,6 +457,10 @@ void intel_pmdemand_program_dbuf(struct drm_i915_private *i915,
> {
> u32 dbufs = min_t(u32, hweight8(dbuf_slices), 3);
>
> + /* PM Demand only tracks active dbufs on pre-Xe3 platforms */
> + if (DISPLAY_VER(i915) >= 30)
> + return;
> +
> mutex_lock(&i915->display.pmdemand.lock);
> if (drm_WARN_ON(&i915->drm,
> !intel_pmdemand_check_prev_transaction(i915)))
> @@ -460,9 +479,10 @@ void intel_pmdemand_program_dbuf(struct drm_i915_private *i915,
> }
>
> static void
> -intel_pmdemand_update_params(const struct intel_pmdemand_state *new,
> - const struct intel_pmdemand_state *old,
> - u32 *reg1, u32 *reg2, bool serialized)
> +intel_pmdemand_update_params(struct intel_display *display,
> + const struct intel_pmdemand_state *new,
> + const struct intel_pmdemand_state *old,
> + u32 *reg1, u32 *reg2, bool serialized)
Wonder if this needs to be aligned?
Otherwise, looks okay to me.
Reviewed-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
> {
> /*
> * The pmdemand parameter updates happens in two steps. Pre plane and
> @@ -495,16 +515,22 @@ intel_pmdemand_update_params(const struct intel_pmdemand_state *new,
> update_reg(reg1, qclk_gv_bw, XELPDP_PMDEMAND_QCLK_GV_BW_MASK);
> update_reg(reg1, voltage_index, XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK);
> update_reg(reg1, qclk_gv_index, XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK);
> - update_reg(reg1, active_pipes, XELPDP_PMDEMAND_PIPES_MASK);
> - update_reg(reg1, active_dbufs, XELPDP_PMDEMAND_DBUFS_MASK);
> update_reg(reg1, active_phys, XELPDP_PMDEMAND_PHYS_MASK);
>
> /* Set 2*/
> update_reg(reg2, cdclk_freq_mhz, XELPDP_PMDEMAND_CDCLK_FREQ_MASK);
> update_reg(reg2, ddiclk_max, XELPDP_PMDEMAND_DDICLK_FREQ_MASK);
> - update_reg(reg2, scalers, XELPDP_PMDEMAND_SCALERS_MASK);
> update_reg(reg2, plls, XELPDP_PMDEMAND_PLLS_MASK);
>
> + if (DISPLAY_VER(display) >= 30) {
> + update_reg(reg1, active_pipes, XE3_PMDEMAND_PIPES_MASK);
> + } else {
> + update_reg(reg1, active_pipes, XELPDP_PMDEMAND_PIPES_MASK);
> + update_reg(reg1, active_dbufs, XELPDP_PMDEMAND_DBUFS_MASK);
> +
> + update_reg(reg2, scalers, XELPDP_PMDEMAND_SCALERS_MASK);
> + }
> +
> #undef update_reg
> }
>
> @@ -514,6 +540,7 @@ intel_pmdemand_program_params(struct drm_i915_private *i915,
> const struct intel_pmdemand_state *old,
> bool serialized)
> {
> + struct intel_display *display = &i915->display;
> bool changed = false;
> u32 reg1, mod_reg1;
> u32 reg2, mod_reg2;
> @@ -529,7 +556,7 @@ intel_pmdemand_program_params(struct drm_i915_private *i915,
> reg2 = intel_de_read(i915, XELPDP_INITIATE_PMDEMAND_REQUEST(1));
> mod_reg2 = reg2;
>
> - intel_pmdemand_update_params(new, old, &mod_reg1, &mod_reg2,
> + intel_pmdemand_update_params(display, new, old, &mod_reg1, &mod_reg2,
> serialized);
>
> if (reg1 != mod_reg1) {
> diff --git a/drivers/gpu/drm/i915/display/intel_pmdemand.h
> b/drivers/gpu/drm/i915/display/intel_pmdemand.h
> index 128fd61f8f14..a1c49efdc493 100644
> --- a/drivers/gpu/drm/i915/display/intel_pmdemand.h
> +++ b/drivers/gpu/drm/i915/display/intel_pmdemand.h
> @@ -20,14 +20,14 @@ struct pmdemand_params {
> u8 voltage_index;
> u8 qclk_gv_index;
> u8 active_pipes;
> - u8 active_dbufs;
> + u8 active_dbufs; /* pre-Xe3 only */
> /* Total number of non type C active phys from active_phys_mask */
> u8 active_phys;
> u8 plls;
> u16 cdclk_freq_mhz;
> /* max from ddi_clocks[] */
> u16 ddiclk_max;
> - u8 scalers;
> + u8 scalers; /* pre-Xe3 only */
> };
>
> struct intel_pmdemand_state {
> diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
> index 405f409e9761..89e4381f8baa 100644
> --- a/drivers/gpu/drm/i915/i915_reg.h
> +++ b/drivers/gpu/drm/i915/i915_reg.h
> @@ -2696,6 +2696,7 @@
> #define XELPDP_PMDEMAND_QCLK_GV_BW_MASK REG_GENMASK(31, 16)
> #define XELPDP_PMDEMAND_VOLTAGE_INDEX_MASK REG_GENMASK(14, 12)
> #define XELPDP_PMDEMAND_QCLK_GV_INDEX_MASK REG_GENMASK(11, 8)
> +#define XE3_PMDEMAND_PIPES_MASK REG_GENMASK(7, 4)
> #define XELPDP_PMDEMAND_PIPES_MASK REG_GENMASK(7, 6)
> #define XELPDP_PMDEMAND_DBUFS_MASK REG_GENMASK(5, 4)
> #define XELPDP_PMDEMAND_PHYS_MASK REG_GENMASK(2, 0)