From: Jani Nikula <jani.nikula@linux.intel.com>
To: Vinay Belgaumkar <vinay.belgaumkar@intel.com>,
intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost
Date: Mon, 16 May 2022 10:59:29 +0300 [thread overview]
Message-ID: <874k1pj4bi.fsf@intel.com> (raw)
In-Reply-To: <20220515060506.22084-1-vinay.belgaumkar@intel.com>
On Sat, 14 May 2022, Vinay Belgaumkar <vinay.belgaumkar@intel.com> wrote:
> SLPC min/max frequency updates require H2G calls. We are seeing
> timeouts when the GuC channel is backed up and unable to respond
> in a timely fashion, causing warnings and affecting CI.
>
> This is seen when waitboosting happens during a stress test.
> This patch updates the waitboost path to use a non-blocking
> H2G call instead, which returns as soon as the message is
> successfully transmitted.
>
> v2: Use drm_notice to report any errors that might occur while
> sending the waitboost H2G request (Tvrtko)
>
> Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
> ---
> drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c | 44 +++++++++++++++++----
> 1 file changed, 36 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> index 1db833da42df..e5e869c96262 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_slpc.c
> @@ -98,6 +98,30 @@ static u32 slpc_get_state(struct intel_guc_slpc *slpc)
> return data->header.global_state;
> }
>
> +static int guc_action_slpc_set_param_nb(struct intel_guc *guc, u8 id, u32 value)
> +{
> + u32 request[] = {
This could be static const.
> + GUC_ACTION_HOST2GUC_PC_SLPC_REQUEST,
> + SLPC_EVENT(SLPC_EVENT_PARAMETER_SET, 2),
> + id,
> + value,
> + };
> + int ret;
> +
> + ret = intel_guc_send_nb(guc, request, ARRAY_SIZE(request), 0);
> +
> + return ret > 0 ? -EPROTO : ret;
> +}
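
As context for the return handling above, here is a minimal standalone
sketch (hypothetical names, not the driver code) of the convention where
any unexpected positive return from the send path is normalized to a
protocol error, while 0 and negative errnos pass through unchanged:

```c
#include <errno.h>

/* Hypothetical stand-in for intel_guc_send_nb(): by convention it
 * returns 0 on success, a negative errno on failure, or (unexpectedly)
 * some positive value. */
static int fake_send_nb(int outcome)
{
	return outcome;
}

/* Mirrors the pattern in the patch: pass 0 and negative errors
 * through unchanged, but map any positive return to -EPROTO. */
static int send_and_normalize(int outcome)
{
	int ret = fake_send_nb(outcome);

	return ret > 0 ? -EPROTO : ret;
}
```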
> +
> +static int slpc_set_param_nb(struct intel_guc_slpc *slpc, u8 id, u32 value)
> +{
> + struct intel_guc *guc = slpc_to_guc(slpc);
> +
> + GEM_BUG_ON(id >= SLPC_MAX_PARAM);
> +
> + return guc_action_slpc_set_param_nb(guc, id, value);
> +}
> +
> static int guc_action_slpc_set_param(struct intel_guc *guc, u8 id, u32 value)
> {
> u32 request[] = {
Ditto here. The whole gt/uc directory seems to have tons of these
u32 action/request arrays built on the stack, with the initialization
that requires, when they could live in rodata.
Please fix all of them.
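
To illustrate the point, here is a standalone sketch (placeholder
values, not real GuC opcodes) of the pattern: when every element is a
compile-time constant, the request array can be static const, placing
it in rodata instead of being rebuilt on the stack at each call:

```c
#include <stddef.h>

/* Sketch only, not driver code: a fully constant request array can be
 * static const, so the compiler emits it once in rodata rather than
 * generating per-call stack initialization. */
static size_t constant_request_len(void)
{
	static const unsigned int request[] = {
		0x3003,	/* placeholder action opcode */
		0x0,	/* placeholder event descriptor */
	};

	return sizeof(request) / sizeof(request[0]);
}
```

Note that in functions like the quoted one, id and value are runtime
arguments, so only an array whose contents are entirely compile-time
constants can be moved out this way.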
BR,
Jani.
> @@ -208,12 +232,10 @@ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
> */
>
> with_intel_runtime_pm(&i915->runtime_pm, wakeref) {
> - ret = slpc_set_param(slpc,
> - SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
> - freq);
> - if (ret)
> - i915_probe_error(i915, "Unable to force min freq to %u: %d",
> - freq, ret);
> + /* Non-blocking request will avoid stalls */
> + ret = slpc_set_param_nb(slpc,
> + SLPC_PARAM_GLOBAL_MIN_GT_UNSLICE_FREQ_MHZ,
> + freq);
> }
>
> return ret;
> @@ -222,6 +244,8 @@ static int slpc_force_min_freq(struct intel_guc_slpc *slpc, u32 freq)
> static void slpc_boost_work(struct work_struct *work)
> {
> struct intel_guc_slpc *slpc = container_of(work, typeof(*slpc), boost_work);
> + struct drm_i915_private *i915 = slpc_to_i915(slpc);
> + int err;
>
> /*
> * Raise min freq to boost. It's possible that
> @@ -231,8 +255,12 @@ static void slpc_boost_work(struct work_struct *work)
> */
> mutex_lock(&slpc->lock);
> if (atomic_read(&slpc->num_waiters)) {
> - slpc_force_min_freq(slpc, slpc->boost_freq);
> - slpc->num_boosts++;
> + err = slpc_force_min_freq(slpc, slpc->boost_freq);
> + if (!err)
> + slpc->num_boosts++;
> + else
> + drm_notice(&i915->drm, "Failed to send waitboost request (%d)\n",
> + err);
> }
> mutex_unlock(&slpc->lock);
> }
--
Jani Nikula, Intel Open Source Graphics Center
Thread overview: 25+ messages
2022-05-15 6:05 [Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost Vinay Belgaumkar
2022-05-15 6:39 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/guc/slpc: Use non-blocking H2G for waitboost (rev2) Patchwork
2022-05-15 7:51 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2022-05-16 7:59 ` Jani Nikula [this message]
2022-05-16 8:00 ` [Intel-gfx] [PATCH] drm/i915/guc/slpc: Use non-blocking H2G for waitboost Jani Nikula
2022-06-07 23:02 ` John Harrison
2022-06-07 23:04 ` John Harrison
2022-06-08 7:58 ` Jani Nikula
2022-06-07 22:29 ` Dixit, Ashutosh
2022-06-07 23:15 ` John Harrison
2022-06-08 17:39 ` Dixit, Ashutosh
2022-06-22 0:26 ` Dixit, Ashutosh
2022-06-22 20:30 ` Belgaumkar, Vinay
2022-06-22 21:28 ` Dixit, Ashutosh
2022-06-23 8:12 ` Tvrtko Ursulin
-- strict thread matches above, loose matches on Subject: below --
2022-06-23 0:32 Vinay Belgaumkar
2022-06-23 0:53 ` Dixit, Ashutosh
2022-05-05 5:40 Vinay Belgaumkar
2022-05-05 12:13 ` Tvrtko Ursulin
2022-05-05 17:21 ` Belgaumkar, Vinay
2022-05-05 18:36 ` John Harrison
2022-05-06 7:18 ` Tvrtko Ursulin
2022-05-06 16:21 ` Belgaumkar, Vinay
2022-05-06 16:43 ` John Harrison
2022-05-15 5:46 ` Belgaumkar, Vinay