From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] drm/i915/guc/slpc: Allow SLPC to use efficient frequency
Date: Mon, 15 Aug 2022 12:51:02 -0400 [thread overview]
Message-ID: <Yvp5dp0Oa0sobOo6@intel.com> (raw)
In-Reply-To: <20220810000306.5476-1-vinay.belgaumkar@intel.com>
On Tue, Aug 09, 2022 at 05:03:06PM -0700, Vinay Belgaumkar wrote:
> Host Turbo operates at efficient frequency when GT is not idle unless
> the user or workload has forced it to a higher level. Replicate the same
> behavior in SLPC by allowing the algorithm to use efficient frequency.
> We had disabled it during boot due to concerns that it might break
> kernel ABI for min frequency. However, this is not the case, since
> SLPC will still abide by the (min,max) range limits, and pcode forces
> frequency to 0 anyway when GT is in C6.
>
> We also see much better perf numbers with benchmarks like glmark2 with
> efficient frequency usage enabled.
>
> Fixes: 025cb07bebfa ("drm/i915/guc/slpc: Cache platform frequency limits")
>
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
I'm honestly surprised that our CI passed cleanly. What happens when the
user requests both min and max < RPe?
I'm sure that in this case GuC SLPC will put us at RPe, ignoring our
requests. Or is this good enough for users' expectations because the soft
limits show the requested freq, and we never ask GuC what it currently has
as its minimum?
I just want to be sure that we are not causing any confusion for end users
out there in case they request some min/max below RPe and start seeing a
mismatch with their expectation because GuC is forcing the real min request
up to RPe.
My suggestion is to ignore RPe whenever we have a min request below it,
so GuC respects our (and the user's) chosen min, and to restore it
whenever the min request is above RPe.
Thanks,
Rodrigo.
> ---
> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 52 ---------------------
> 1 file changed, 52 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index e1fa1f32f29e..4b824da3048a 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -137,17 +137,6 @@ static int guc_action_slpc_set_param(struct intel_guc *guc, u8 id, u32 value)
> return ret > 0 ? -EPROTO : ret;
> }
>
> -static int guc_action_slpc_unset_param(struct intel_guc *guc, u8 id)
> -{
> - u32 request[] = {
> - GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,
> - SLPC_EVENT(SLPC_EVENT_PARAMETER_UNSET, 1),
> - id,
> - };
> -
> - return intel_guc_send(guc, request, ARRAY_SIZE(request));
> -}
> -
> static bool slpc_is_running(struct intel_guc_slpc *slpc)
> {
> return slpc_get_state(slpc) == SLPC_GLOBAL_STATE_RUNNING;
> @@ -201,16 +190,6 @@ static int slpc_set_param(struct intel_guc_slpc *slpc, u8 id, u32 value)
> return ret;
> }
>
> -static int slpc_unset_param(struct intel_guc_slpc *slpc,
> - u8 id)
> -{
> - struct intel_guc *guc = slpc_to_guc(slpc);
> -
> - GEM_BUG_ON(id >= SLPC_MAX_PARAM);
> -
> - return guc_action_slpc_unset_param(guc, id);
> -}
> -
> static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
> {
> struct drm_i915_private *i915 = slpc_to_i915(slpc);
> @@ -597,29 +576,6 @@ static int slpc_set_softlimits(struct intel_guc_slpc *slpc)
> return 0;
> }
>
> -static int slpc_ignore_eff_freq(struct intel_guc_slpc *slpc, bool ignore)
> -{
> - int ret = 0;
> -
> - if (ignore) {
> - ret = slpc_set_param(slpc,
> - SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY,
> - ignore);
> - if (!ret)
> - return slpc_set_param(slpc,
> - SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
> - slpc->min_freq);
> - } else {
> - ret = slpc_unset_param(slpc,
> - SLPC_PARAM_IGNORE_EFFICIENT_FREQUENCY);
> - if (!ret)
> - return slpc_unset_param(slpc,
> - SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ);
> - }
> -
> - return ret;
> -}
> -
> static int slpc_use_fused_rp0(struct intel_guc_slpc *slpc)
> {
> /* Force SLPC to used platform rp0 */
> @@ -679,14 +635,6 @@ int intel_guc_slpc_enable(struct intel_guc_slpc *slpc)
>
> slpc_get_rp_values(slpc);
>
> - /* Ignore efficient freq and set min to platform min */
> - ret = slpc_ignore_eff_freq(slpc, true);
> - if (unlikely(ret)) {
> - i915_probe_error(i915, "Failed to set SLPC min to RPn (%pe)\n",
> - ERR_PTR(ret));
> - return ret;
> - }
> -
> /* Set SLPC max limit to RP0 */
> ret = slpc_use_fused_rp0(slpc);
> if (unlikely(ret)) {
> --
> 2.35.1
>
Thread overview: 9+ messages
2022-08-10 0:03 [Intel-gfx] [PATCH] drm/i915/guc/slpc: Allow SLPC to use efficient frequency Vinay Belgaumkar
2022-08-10 1:58 ` [Intel-gfx] ✗ Fi.CI.BAT: failure for " Patchwork
2022-08-15 16:51 ` Rodrigo Vivi [this message]
2022-08-15 16:52 ` [Intel-gfx] [PATCH] " Belgaumkar, Vinay
-- strict thread matches above, loose matches on Subject: below --
2022-08-14 23:46 Vinay Belgaumkar
2022-08-14 23:51 ` Belgaumkar, Vinay
2022-08-15 17:32 ` Rodrigo Vivi
2022-08-15 23:16 ` Belgaumkar, Vinay
2022-12-09 18:31 ` Dixit, Ashutosh