AMD-GFX Archive on lore.kernel.org
From: Jesse Zhang <Jesse.Zhang@amd.com>
To: <amd-gfx@lists.freedesktop.org>
Cc: Christian Koenig <christian.koenig@amd.com>,
	Alex Deucher <alexander.deucher@amd.com>
Subject: [PATCH v3 1/8] drm/amdgpu: add coordinated MEC pipe reset for GFX compute queues
Date: Tue, 14 Apr 2026 16:58:48 +0800
Message-ID: <20260414085926.3171086-1-Jesse.Zhang@amd.com>


Introduce a shared mutex and common helpers to serialize MEC pipe reset
sequences between KGD (DRM scheduler) and KFD (AMDKFD) paths. This
prevents races where one path could stop/start schedulers or reprogram
hardware while the other is in the middle of a pipe reset, potentially
leading to queue map/unmap corruption or HQD state mismatches.

The change adds:

  - mec_pipe_reset_mutex to struct amdgpu_gfx, initialized during
    device init.

  - amdgpu_gfx_mec_pipe_reset_prepare(): stops DRM schedulers and KFD
    scheduling for all compute rings on a given (xcc_id, me, pipe)
    tuple, backing up unprocessed commands except for an optional
    guilty queue that is already handled via the KGD ring reset path.

  - amdgpu_gfx_mec_pipe_restart_schedulers(): restarts all schedulers
    and KFD scheduling for the affected pipe.

  - amdgpu_gfx_mec_pipe_reset_recover_queues(): re-initializes and
    remaps each KCQ on the pipe, optionally using a timed-out fence for
    the guilty queue and collateral fences for others, then completes
    the ring reset helper sequence.

  - amdgpu_gfx_mec_pipe_reset_run(): the core orchestration routine
    that takes the mutex, invokes prepare, performs the HW pipe reset
    via either a KFD or KGD callback, restarts schedulers on error, and
    recovers queues.

The implementation correctly handles single and multi-XCC configurations
by offsetting into the compute_ring array per partition. The special
queue value AMDGPU_MEC_PIPE_RESET_NO_QUEUE allows KFD-initiated resets
where no single DRM KCQ is identified as the timeout victim.

Suggested-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Jesse Zhang <jesse.zhang@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |   1 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c    | 196 +++++++++++++++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h    |  35 ++++
 3 files changed, 232 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index fbdf458758d6..62d573b6135f 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3742,6 +3742,7 @@ int amdgpu_device_init(struct amdgpu_device *adev,
 		amdgpu_sync_create(&adev->isolation[i].active);
 		amdgpu_sync_create(&adev->isolation[i].prev);
 	}
+	mutex_init(&adev->gfx.mec_pipe_reset_mutex);
 	mutex_init(&adev->gfx.userq_sch_mutex);
 	mutex_init(&adev->gfx.workload_profile_mutex);
 	mutex_init(&adev->vcn.workload_profile_mutex);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
index 2956e45c9254..8118a91f6b64 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
@@ -24,6 +24,7 @@
  */
 
 #include <linux/firmware.h>
+#include <linux/lockdep.h>
 #include <linux/pm_runtime.h>
 
 #include "amdgpu.h"
@@ -69,6 +70,201 @@ void amdgpu_queue_mask_bit_to_mec_queue(struct amdgpu_device *adev, int bit,
 
 }
 
+static bool amdgpu_gfx_ring_on_mec_pipe(struct amdgpu_ring *ring, u32 me, u32 pipe)
+{
+	if (!ring || !ring->funcs || ring->funcs->type != AMDGPU_RING_TYPE_COMPUTE)
+		return false;
+	if (ring->no_scheduler)
+		return false;
+
+	return ring->me == me && ring->pipe == pipe;
+}
+
+/* Same layout as amdgpu_gfx_run_cleaner_shader(): block of num_compute_rings per XCC. */
+static unsigned int amdgpu_gfx_mec_pipe_compute_ring_base(struct amdgpu_device *adev,
+							 u32 xcc_id)
+{
+	int num_xcc = adev->gfx.xcc_mask ? NUM_XCC(adev->gfx.xcc_mask) : 1;
+
+	if (num_xcc <= 1)
+		return 0;
+	return xcc_id * adev->gfx.num_compute_rings;
+}
+
+/**
+ * amdgpu_gfx_mec_pipe_reset_prepare - stop schedulers before the MEC pipe HW reset
+ *
+ * Backs up ring state for KCQs on (@xcc_id, @me, @pipe), stops their DRM
+ * schedulers, and stops KFD scheduling for the node. The queue matching
+ * @guilty_queue is skipped unless it is AMDGPU_MEC_PIPE_RESET_NO_QUEUE, as
+ * the KGD path already backed it up via amdgpu_ring_reset_helper_begin().
+ *
+ * Caller must hold &adev->gfx.mec_pipe_reset_mutex (e.g. via
+ * amdgpu_gfx_mec_pipe_reset_run()).
+ */
+void amdgpu_gfx_mec_pipe_reset_prepare(struct amdgpu_device *adev,
+				       u32 xcc_id, u32 me, u32 pipe,
+				       u32 guilty_queue)
+{
+	struct amdgpu_ring *ring;
+	unsigned int j, base;
+	bool backup_all = (guilty_queue == AMDGPU_MEC_PIPE_RESET_NO_QUEUE);
+
+	lockdep_assert_held(&adev->gfx.mec_pipe_reset_mutex);
+
+	base = amdgpu_gfx_mec_pipe_compute_ring_base(adev, xcc_id);
+	for (j = 0; j < adev->gfx.num_compute_rings; j++) {
+		ring = &adev->gfx.compute_ring[base + j];
+		if (!amdgpu_gfx_ring_on_mec_pipe(ring, me, pipe))
+			continue;
+		if (backup_all || ring->queue != guilty_queue)
+			amdgpu_ring_backup_unprocessed_commands(ring, NULL);
+		if (amdgpu_ring_sched_ready(ring))
+			drm_sched_wqueue_stop(&ring->sched);
+	}
+}
+
+void amdgpu_gfx_mec_pipe_restart_schedulers(struct amdgpu_device *adev,
+					    u32 me, u32 pipe, u32 xcc_id)
+{
+	struct amdgpu_ring *ring;
+	unsigned int j, base;
+
+	lockdep_assert_held(&adev->gfx.mec_pipe_reset_mutex);
+
+	base = amdgpu_gfx_mec_pipe_compute_ring_base(adev, xcc_id);
+	for (j = 0; j < adev->gfx.num_compute_rings; j++) {
+		ring = &adev->gfx.compute_ring[base + j];
+		if (!amdgpu_gfx_ring_on_mec_pipe(ring, me, pipe))
+			continue;
+		if (amdgpu_ring_sched_ready(ring))
+			drm_sched_wqueue_start(&ring->sched);
+	}
+}
+
+/**
+ * amdgpu_gfx_mec_pipe_reset_recover_queues - re-init KCQs after MEC pipe reset
+ *
+ * Re-inits and remaps every kernel compute queue on (@xcc_id, @me, @pipe),
+ * restarts the schedulers, then runs amdgpu_ring_reset_helper_end() on each
+ * ring. @guilty_queue is the MEC queue index of the timed-out KCQ, or
+ * AMDGPU_MEC_PIPE_RESET_NO_QUEUE when every ring should use the collateral
+ * fence (in which case @timedout_fence must be NULL). @kcq_init is an
+ * optional IP hook that re-inits the HQD and remaps the queue via MES.
+ *
+ * Caller must hold &adev->gfx.mec_pipe_reset_mutex (e.g. via
+ * amdgpu_gfx_mec_pipe_reset_run()).
+ */
+int amdgpu_gfx_mec_pipe_reset_recover_queues(struct amdgpu_device *adev,
+					     u32 xcc_id, u32 me, u32 pipe,
+					     u32 guilty_queue,
+					     struct amdgpu_fence *timedout_fence,
+					     amdgpu_gfx_kcq_init_queue_t kcq_init)
+{
+	struct amdgpu_fence collateral_reemit = {};
+	struct amdgpu_ring *ring;
+	unsigned int j, base;
+	int err = 0;
+	bool has_guilty = (guilty_queue != AMDGPU_MEC_PIPE_RESET_NO_QUEUE);
+
+	lockdep_assert_held(&adev->gfx.mec_pipe_reset_mutex);
+
+	if (has_guilty && !timedout_fence)
+		return -EINVAL;
+
+	collateral_reemit.context = (u64)-1;
+
+	base = amdgpu_gfx_mec_pipe_compute_ring_base(adev, xcc_id);
+	if (kcq_init) {
+		for (j = 0; j < adev->gfx.num_compute_rings; j++) {
+			ring = &adev->gfx.compute_ring[base + j];
+			if (!amdgpu_gfx_ring_on_mec_pipe(ring, me, pipe))
+				continue;
+
+			err = kcq_init(ring, true);
+			if (err)
+				goto err_sched;
+			err = amdgpu_mes_map_legacy_queue(adev, ring, 0);
+			if (err)
+				goto err_sched;
+		}
+	}
+
+	amdgpu_gfx_mec_pipe_restart_schedulers(adev, me, pipe, xcc_id);
+
+	for (j = 0; j < adev->gfx.num_compute_rings; j++) {
+		ring = &adev->gfx.compute_ring[base + j];
+		if (!amdgpu_gfx_ring_on_mec_pipe(ring, me, pipe))
+			continue;
+
+		err = amdgpu_ring_reset_helper_end(
+			ring,
+			(timedout_fence && ring->queue == guilty_queue) ?
+				timedout_fence :
+				&collateral_reemit);
+		if (err) {
+			dev_err(adev->dev,
+				"ring %s failed recover after MEC pipe reset (%d)\n",
+				ring->name, err);
+			return err;
+		}
+	}
+
+	return 0;
+
+err_sched:
+	amdgpu_gfx_mec_pipe_restart_schedulers(adev, me, pipe, xcc_id);
+	return err;
+}
+
+/**
+ * amdgpu_gfx_mec_pipe_reset_run - coordinate MEC pipe reset between KGD and KFD
+ *
+ * Takes &adev->gfx.mec_pipe_reset_mutex for the full prepare -> HW pipe reset
+ * -> recover sequence so the KFD and KGD paths cannot interleave scheduler
+ * stop/start, MES map/unmap, or HQD programming on the same device.
+ *
+ * @queue: MEC queue index (required when @kcq_pipe_reset is used).
+ * AMDGPU_MEC_PIPE_RESET_NO_QUEUE is only valid with @kfd_pipe_reset (KFD path;
+ * pass @timedout_fence NULL). At least one of @kcq_pipe_reset or @kfd_pipe_reset
+ * must be non-NULL.
+ * If both are provided, only @kfd_pipe_reset is invoked.
+ *
+ * Returns: 0 on success, or a negative error code.
+ */
+int amdgpu_gfx_mec_pipe_reset_run(struct amdgpu_device *adev,
+				  u32 xcc_id, u32 me, u32 pipe, u32 queue,
+				  struct amdgpu_fence *timedout_fence,
+				  amdgpu_gfx_kcq_mec_pipe_reset_t kcq_pipe_reset,
+				  amdgpu_gfx_kfd_mec_pipe_reset_t kfd_pipe_reset,
+				  amdgpu_gfx_kcq_init_queue_t kcq_init)
+{
+	int err;
+
+	if (!kcq_pipe_reset && !kfd_pipe_reset)
+		return -EINVAL;
+
+	mutex_lock(&adev->gfx.mec_pipe_reset_mutex);
+	amdgpu_gfx_mec_pipe_reset_prepare(adev, xcc_id, me, pipe, queue);
+
+	if (kfd_pipe_reset)
+		err = kfd_pipe_reset(adev, xcc_id, me, pipe);
+	else
+		err = kcq_pipe_reset(adev, me, pipe, queue);
+
+	if (err) {
+		amdgpu_gfx_mec_pipe_restart_schedulers(adev, me, pipe, xcc_id);
+		mutex_unlock(&adev->gfx.mec_pipe_reset_mutex);
+		return err;
+	}
+
+	err = amdgpu_gfx_mec_pipe_reset_recover_queues(adev, xcc_id, me, pipe,
+							queue, timedout_fence,
+							kcq_init);
+	mutex_unlock(&adev->gfx.mec_pipe_reset_mutex);
+	return err;
+}
+
 bool amdgpu_gfx_is_mec_queue_enabled(struct amdgpu_device *adev,
 				     int xcc_id, int mec, int pipe, int queue)
 {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
index a0cf0a3b41da..a1f13262d782 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
@@ -527,6 +527,9 @@ struct amdgpu_gfx {
 	const void			*cleaner_shader_ptr;
 	bool				enable_cleaner_shader;
 	struct amdgpu_isolation_work	enforce_isolation[MAX_XCP];
+	/* Serialize MEC pipe reset prep/HW/recover between KGD and KFD */
+	struct mutex			mec_pipe_reset_mutex;
+
 	/* Mutex for synchronizing KFD scheduler operations */
 	struct mutex                    userq_sch_mutex;
 	u64				userq_sch_req_count[MAX_XCP];
@@ -603,6 +606,38 @@ int amdgpu_gfx_mec_queue_to_bit(struct amdgpu_device *adev, int mec,
 				int pipe, int queue);
 void amdgpu_queue_mask_bit_to_mec_queue(struct amdgpu_device *adev, int bit,
 				 int *mec, int *pipe, int *queue);
+
+/*
+ * Pass @queue == AMDGPU_MEC_PIPE_RESET_NO_QUEUE when no DRM KCQ is the timeout
+ * victim (e.g. a KFD-driven pipe reset); all queues on the pipe are backed up
+ * in prepare, and recover uses collateral fences only.
+ */
+#define AMDGPU_MEC_PIPE_RESET_NO_QUEUE		U32_MAX
+
+typedef int (*amdgpu_gfx_kcq_init_queue_t)(struct amdgpu_ring *ring, bool clear);
+typedef int (*amdgpu_gfx_kcq_mec_pipe_reset_t)(struct amdgpu_device *adev,
+					       u32 me, u32 pipe, u32 queue);
+typedef int (*amdgpu_gfx_kfd_mec_pipe_reset_t)(struct amdgpu_device *adev,
+					       u32 xcc_id, u32 me, u32 pipe);
+
+int amdgpu_gfx_mec_pipe_reset_run(struct amdgpu_device *adev,
+				  u32 xcc_id, u32 me, u32 pipe, u32 queue,
+				  struct amdgpu_fence *timedout_fence,
+				  amdgpu_gfx_kcq_mec_pipe_reset_t kcq_pipe_reset,
+				  amdgpu_gfx_kfd_mec_pipe_reset_t kfd_pipe_reset,
+				  amdgpu_gfx_kcq_init_queue_t kcq_init);
+
+void amdgpu_gfx_mec_pipe_reset_prepare(struct amdgpu_device *adev,
+				       u32 xcc_id, u32 me, u32 pipe,
+				       u32 guilty_queue);
+void amdgpu_gfx_mec_pipe_restart_schedulers(struct amdgpu_device *adev,
+					    u32 me, u32 pipe, u32 xcc_id);
+int amdgpu_gfx_mec_pipe_reset_recover_queues(
+	struct amdgpu_device *adev,
+	u32 xcc_id, u32 me, u32 pipe,
+	u32 guilty_queue,
+	struct amdgpu_fence *timedout_fence,
+	amdgpu_gfx_kcq_init_queue_t kcq_init);
 bool amdgpu_gfx_is_mec_queue_enabled(struct amdgpu_device *adev, int xcc_id,
 				     int mec, int pipe, int queue);
 bool amdgpu_gfx_is_high_priority_compute_queue(struct amdgpu_device *adev,
-- 
2.49.0


Thread overview: 8+ messages
2026-04-14  8:58 Jesse Zhang [this message]
2026-04-14  8:58 ` [PATCH v3 2/8] drm/amdgpu/gfx11: Refactor compute pipe reset and add HQD cleanup Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 3/8] drm/amdgpu/gfx11: Fall back to pipe reset if per-queue reset ring test fails Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 4/8] drm/amdgpu/gfx11: enable per-pipe reset support for compute queues Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 5/8] drm/amdgpu/gfx12: Refactor compute pipe reset and add HQD cleanup Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 6/8] drm/amdgpu/gfx12: Fall back to pipe reset if per-queue reset ring test fails Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 7/8] drm/amdgpu/gfx12: enable per-pipe reset support for compute queues Jesse Zhang
2026-04-14  8:58 ` [PATCH v3 8/8] drm/amdgpu/gfx_v12_0: set gfx.rs64_enable from PFP header on GFX12 Jesse Zhang
