From: John Harrison <john.c.harrison@intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>,
<intel-gfx@lists.freedesktop.org>
Cc: Alan Previn <alan.previn.teres.alexis@intel.com>,
dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH] drm/i915/huc: always init the delayed load fence
Date: Mon, 28 Nov 2022 10:54:56 -0800 [thread overview]
Message-ID: <67018b39-296e-e4a6-80f6-1802baee240f@intel.com> (raw)
In-Reply-To: <20221123235417.1475709-1-daniele.ceraolospurio@intel.com>
On 11/23/2022 15:54, Daniele Ceraolo Spurio wrote:
> The fence is only tracking if the HuC load is in progress or not and
> doesn't distinguish between already loaded, not supported or disabled,
> so we can always initialize it to completed, no matter the actual
> support. We already do that for most platforms, but we skip it on
> GTs that lack VCS engines (i.e. MTL root GT), so fix that. Note that the
i.e. -> e.g.; there is more than just the MTL root GT.
> cleanup is already unconditional.
>
> While at it, move the init/fini to helper functions.
>
> Fixes: 02224691cb0f ("drm/i915/huc: fix leak of debug object in huc load fence on driver unload")
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Cc: John Harrison <John.C.Harrison@Intel.com>
> Cc: Alan Previn <alan.previn.teres.alexis@intel.com>
> ---
> drivers/gpu/drm/i915/gt/uc/intel_huc.c | 47 +++++++++++++++++++-------
> 1 file changed, 34 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
> index 0976e9101346..5f393f8e8b2e 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
> @@ -211,6 +211,30 @@ void intel_huc_unregister_gsc_notifier(struct intel_huc *huc, struct bus_type *b
> huc->delayed_load.nb.notifier_call = NULL;
> }
>
> +static void delayed_huc_load_init(struct intel_huc *huc)
> +{
> + /*
> + * Initialize fence to be complete as this is expected to be complete
> + * unless there is a delayed HuC reload in progress.
reload -> load?
> + */
> + i915_sw_fence_init(&huc->delayed_load.fence,
> + sw_fence_dummy_notify);
> + i915_sw_fence_commit(&huc->delayed_load.fence);
> +
> + hrtimer_init(&huc->delayed_load.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> + huc->delayed_load.timer.function = huc_delayed_load_timer_callback;
> +}
> +
> +static void delayed_huc_load_fini(struct intel_huc *huc)
> +{
> + /*
> + * the fence is initialized in init_early, so we need to clean it up
> + * even if HuC loading is off.
> + */
> + delayed_huc_load_complete(huc);
> + i915_sw_fence_fini(&huc->delayed_load.fence);
> +}
> +
> static bool vcs_supported(struct intel_gt *gt)
> {
> intel_engine_mask_t mask = gt->info.engine_mask;
> @@ -241,6 +265,15 @@ void intel_huc_init_early(struct intel_huc *huc)
>
> intel_uc_fw_init_early(&huc->fw, INTEL_UC_FW_TYPE_HUC);
>
> + /*
> + * we always init the fence as already completed, even if HuC is not
> + * supported. This way we don't have to distinguish between HuC not
> + * supported/disabled or already loaded, band can focus on if the load
band -> and
Looks good otherwise. So with the typos fixed:
Reviewed-by: John Harrison <John.C.Harrison@Intel.com>
> + * is currently in progress (fence not complete) or not, which is what
> + * we care about for stalling userspace submissions.
> + */
> + delayed_huc_load_init(huc);
> +
> if (!vcs_supported(gt)) {
> intel_uc_fw_change_status(&huc->fw, INTEL_UC_FIRMWARE_NOT_SUPPORTED);
> return;
> @@ -255,17 +288,6 @@ void intel_huc_init_early(struct intel_huc *huc)
> huc->status.mask = HUC_FW_VERIFIED;
> huc->status.value = HUC_FW_VERIFIED;
> }
> -
> - /*
> - * Initialize fence to be complete as this is expected to be complete
> - * unless there is a delayed HuC reload in progress.
> - */
> - i915_sw_fence_init(&huc->delayed_load.fence,
> - sw_fence_dummy_notify);
> - i915_sw_fence_commit(&huc->delayed_load.fence);
> -
> - hrtimer_init(&huc->delayed_load.timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
> - huc->delayed_load.timer.function = huc_delayed_load_timer_callback;
> }
>
> #define HUC_LOAD_MODE_STRING(x) (x ? "GSC" : "legacy")
> @@ -333,8 +355,7 @@ void intel_huc_fini(struct intel_huc *huc)
> * the fence is initialized in init_early, so we need to clean it up
> * even if HuC loading is off.
> */
> - delayed_huc_load_complete(huc);
> - i915_sw_fence_fini(&huc->delayed_load.fence);
> + delayed_huc_load_fini(huc);
>
> if (intel_uc_fw_is_loadable(&huc->fw))
> intel_uc_fw_fini(&huc->fw);
2022-11-23 23:54 [Intel-gfx] [PATCH] drm/i915/huc: always init the delayed load fence Daniele Ceraolo Spurio
2022-11-24 0:46 ` [Intel-gfx] ✗ Fi.CI.DOCS: warning for " Patchwork
2022-11-24 1:30 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2022-11-28 18:54 ` John Harrison [this message]
2022-11-28 21:54 ` [Intel-gfx] ✓ Fi.CI.BAT: success for drm/i915/huc: always init the delayed load fence (rev2) Patchwork
2022-11-29 4:55 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork