From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: intel-xe@lists.freedesktop.org, Rodrigo Vivi <rodrigo.vivi@intel.com>
Subject: Re: [Intel-xe] [PATCH 2/2] drm/xe: nuke GuC on unload
Date: Thu, 24 Aug 2023 14:26:47 +0000 [thread overview]
Message-ID: <ZOdop3raHNPl1O/C@DUT025-TGLU.fm.intel.com> (raw)
In-Reply-To: <20230823175551.230686-4-matthew.auld@intel.com>
On Wed, Aug 23, 2023 at 06:55:53PM +0100, Matthew Auld wrote:
> On PVC unloading followed by reloading the module often results in a
> completely dead machine (seems to be plaguing CI). Resetting the GuC
> like we do at load seems to cure it at least when locally testing this.
>
> References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/542
> References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/597
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Seems reasonable to reset the GuC on driver unload, one question below.
> ---
> drivers/gpu/drm/xe/xe_guc.c | 16 ++++++++++++++++
> drivers/gpu/drm/xe/xe_uc.c | 5 +++++
> drivers/gpu/drm/xe/xe_uc.h | 1 +
> 3 files changed, 22 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
> index e102637c0695..bbe06686a706 100644
> --- a/drivers/gpu/drm/xe/xe_guc.c
> +++ b/drivers/gpu/drm/xe/xe_guc.c
> @@ -5,6 +5,8 @@
>
> #include "xe_guc.h"
>
> +#include <drm/drm_managed.h>
> +
> #include "generated/xe_wa_oob.h"
> #include "regs/xe_gt_regs.h"
> #include "regs/xe_guc_regs.h"
> @@ -20,6 +22,7 @@
> #include "xe_guc_submit.h"
> #include "xe_mmio.h"
> #include "xe_platform_types.h"
> +#include "xe_uc.h"
> #include "xe_uc_fw.h"
> #include "xe_wa.h"
> #include "xe_wopcm.h"
> @@ -217,6 +220,15 @@ static void guc_write_params(struct xe_guc *guc)
> xe_mmio_write32(gt, SOFT_SCRATCH(1 + i), guc->params[i]);
> }
>
> +static void guc_fini(struct drm_device *drm, void *arg)
> +{
> + struct xe_guc *guc = arg;
> +
> + xe_force_wake_get(gt_to_fw(guc_to_gt(guc)), XE_FORCEWAKE_ALL);
> + xe_uc_fini_hw(&guc_to_gt(guc)->uc);
> + xe_force_wake_put(gt_to_fw(guc_to_gt(guc)), XE_FORCEWAKE_ALL);
> +}
> +
> int xe_guc_init(struct xe_guc *guc)
> {
> struct xe_device *xe = guc_to_xe(guc);
> @@ -240,6 +252,10 @@ int xe_guc_init(struct xe_guc *guc)
> if (ret)
> goto out;
>
> + ret = drmm_add_action_or_reset(&gt_to_xe(gt)->drm, guc_fini, guc);
> + if (ret)
> + goto out;
> +
Any reason this is after xe_guc_ct_init but before xe_guc_pc_init? Seems
like odd placement.
Matt
> ret = xe_guc_pc_init(&guc->pc);
> if (ret)
> goto out;
> diff --git a/drivers/gpu/drm/xe/xe_uc.c b/drivers/gpu/drm/xe/xe_uc.c
> index addd6f2681b9..9c8ce504f4da 100644
> --- a/drivers/gpu/drm/xe/xe_uc.c
> +++ b/drivers/gpu/drm/xe/xe_uc.c
> @@ -167,6 +167,11 @@ int xe_uc_init_hw(struct xe_uc *uc)
> return 0;
> }
>
> +int xe_uc_fini_hw(struct xe_uc *uc)
> +{
> + return xe_uc_sanitize_reset(uc);
> +}
> +
> int xe_uc_reset_prepare(struct xe_uc *uc)
> {
> /* GuC submission not enabled, nothing to do */
> diff --git a/drivers/gpu/drm/xe/xe_uc.h b/drivers/gpu/drm/xe/xe_uc.h
> index 42219b361df5..4109ae7028af 100644
> --- a/drivers/gpu/drm/xe/xe_uc.h
> +++ b/drivers/gpu/drm/xe/xe_uc.h
> @@ -12,6 +12,7 @@ int xe_uc_init(struct xe_uc *uc);
> int xe_uc_init_hwconfig(struct xe_uc *uc);
> int xe_uc_init_post_hwconfig(struct xe_uc *uc);
> int xe_uc_init_hw(struct xe_uc *uc);
> +int xe_uc_fini_hw(struct xe_uc *uc);
> void xe_uc_gucrc_disable(struct xe_uc *uc);
> int xe_uc_reset_prepare(struct xe_uc *uc);
> void xe_uc_stop_prepare(struct xe_uc *uc);
> --
> 2.41.0
>
Thread overview: 12+ messages
2023-08-23 17:55 [Intel-xe] [PATCH 1/2] drm/xe/ct: fix resv_space print Matthew Auld
2023-08-23 17:55 ` [Intel-xe] [PATCH 2/2] drm/xe: nuke GuC on unload Matthew Auld
2023-08-24 14:26 ` Matthew Brost [this message]
2023-08-24 15:19 ` Matthew Auld
2023-08-23 18:00 ` [Intel-xe] ✓ CI.Patch_applied: success for series starting with [1/2] drm/xe/ct: fix resv_space print Patchwork
2023-08-23 18:00 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-08-23 18:01 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-08-23 18:05 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-08-23 18:06 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-08-23 18:06 ` [Intel-xe] ✗ CI.checksparse: warning " Patchwork
2023-08-23 18:32 ` [Intel-xe] ✓ CI.BAT: success " Patchwork
2023-08-24 14:17 ` [Intel-xe] [PATCH 1/2] " Matthew Brost