* [PATCH v7 0/5] drm: Create a task info option for wedge events
@ 2025-06-13 18:43 André Almeida
2025-06-13 18:43 ` [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info() André Almeida
` (4 more replies)
0 siblings, 5 replies; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
This patchset implements a request made by Xaver Hugl about wedge events:
"I'd really like to have the PID of the client that triggered the GPU
reset, so that we can kill it if multiple resets are triggered in a
row (or switch to software rendering if it's KWin itself) and show a
user-friendly notification about why their app(s) crashed, but that
can be added later."
From https://lore.kernel.org/dri-devel/CAFZQkGwJ4qgHV8WTp2=svJ_VXhb-+Y8_VNtKB=jLsk6DqMYp9w@mail.gmail.com/
For testing, I've used amdgpu's debug_mask options debug_disable_soft_recovery
and debug_disable_gpu_ring_reset to test both wedge event paths in the driver.
To trigger a ring timeout, I've used this app:
https://gitlab.freedesktop.org/andrealmeid/gpu-timeout
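For reference, here is a rough sketch of how a compositor or daemon could
consume this uevent with libudev. It is illustrative only and not part of the
series; the WEDGED/PID/TASK keys match what patch 2 emits, while the rest
(polling loop, printing) is just an assumption about how userspace might wire
it up:

/* Sketch only: listen for DRM wedge uevents and read the optional task info. */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
	struct pollfd pfd;

	udev_monitor_filter_add_match_subsystem_devtype(mon, "drm", NULL);
	udev_monitor_enable_receiving(mon);

	pfd.fd = udev_monitor_get_fd(mon);
	pfd.events = POLLIN;

	while (poll(&pfd, 1, -1) > 0) {
		struct udev_device *dev = udev_monitor_receive_device(mon);

		if (!dev)
			continue;

		const char *wedged = udev_device_get_property_value(dev, "WEDGED");

		if (wedged) {
			/* PID/TASK are optional and NULL when no task info was attached */
			const char *pid = udev_device_get_property_value(dev, "PID");
			const char *task = udev_device_get_property_value(dev, "TASK");

			printf("device wedged: %s, pid: %s, task: %s\n", wedged,
			       pid ? pid : "n/a", task ? task : "n/a");
		}

		udev_device_unref(dev);
	}

	return 0;
}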
Thanks!
Changelog:
v7:
- Change `char *comm` to `char comm[TASK_COMM_LEN]`
- New patches to encapsulate struct drm_wedge_task_info inside of struct
amdgpu_task_info
- Remove struct cast for struct amdgpu_task_info, now we can use `info =
&ti->task`
- Fix struct lifetime, move amdgpu_vm_put_task_info() after
drm_dev_wedged_event() call
v6:
- Check if PID >= 0 for displaying the task info
- s/app/task in a comment
v5:
- Change from app to task also in structs, commit message and docs
- Add a check for NULL or empty task name string
v4:
- Change from APP to TASK
- Add defines for event_string and pid_string length
v3:
- Make comm_string and pid_string empty when there's no app info
- Change "app that caused ..." to "app involved ..."
- Clarify that devcoredump has more information about what happened
v2:
- Rebased on top of drm/drm-next
- Added new patch for documentation
André Almeida (5):
drm: amdgpu: Create amdgpu_vm_print_task_info
drm: Create a task info option for wedge events
drm/doc: Add a section about "Task information" for the wedge API
drm: amdgpu: Use struct drm_wedge_task_info inside of struct
amdgpu_task_info
drm/amdgpu: Make use of drm_wedge_task_info
Documentation/gpu/drm-uapi.rst | 17 ++++++++++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
.../gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c | 4 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 ++++++++++++++--
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 12 ++++++-----
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 19 +++++++++++++-----
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 6 ++++--
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 4 +---
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 2 +-
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_events.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c | 8 ++++----
drivers/gpu/drm/drm_drv.c | 20 +++++++++++++++----
drivers/gpu/drm/i915/gt/intel_reset.c | 3 ++-
drivers/gpu/drm/xe/xe_device.c | 3 ++-
include/drm/drm_device.h | 8 ++++++++
include/drm/drm_drv.h | 3 ++-
22 files changed, 103 insertions(+), 51 deletions(-)
--
2.49.0
* [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info()
2025-06-13 18:43 [PATCH v7 0/5] drm: Create a task info option for wedge events André Almeida
@ 2025-06-13 18:43 ` André Almeida
2025-06-16 7:03 ` Christian König
2025-06-13 18:43 ` [PATCH v7 2/5] drm: Create a task info option for wedge events André Almeida
` (3 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
To avoid repetitive code in amdgpu, create a function that prints the
content of struct amdgpu_task_info.
Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
v7: new patch
---
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 +---
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 9 +++++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c | 5 +----
drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 4 +---
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 5 +----
8 files changed, 18 insertions(+), 22 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 75262ce8db27..3d887428ca2b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -124,9 +124,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
ti = amdgpu_vm_get_task_info_pasid(ring->adev, job->pasid);
if (ti) {
- dev_err(adev->dev,
- "Process information: process %s pid %d thread %s pid %d\n",
- ti->process_name, ti->tgid, ti->task_name, ti->pid);
+ amdgpu_vm_print_task_info(adev, ti);
amdgpu_vm_put_task_info(ti);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 3911c78f8282..f2a0132521c2 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -3156,3 +3156,12 @@ bool amdgpu_vm_is_bo_always_valid(struct amdgpu_vm *vm, struct amdgpu_bo *bo)
{
return bo && bo->tbo.base.resv == vm->root.bo->tbo.base.resv;
}
+
+inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
+ struct amdgpu_task_info *task_info)
+{
+ dev_err(adev->dev,
+ " Process %s pid %d thread %s pid %d\n",
+ task_info->process_name, task_info->tgid,
+ task_info->task_name, task_info->pid);
+}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index f3ad687125ad..3862a256b9b8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -668,4 +668,7 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev,
struct amdgpu_vm *vm,
struct dma_fence **fence);
+inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
+ struct amdgpu_task_info *task_info);
+
#endif
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
index a3e2787501f1..7923f491cf73 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
@@ -164,10 +164,7 @@ static int gmc_v10_0_process_interrupt(struct amdgpu_device *adev,
entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
if (task_info) {
- dev_err(adev->dev,
- " in process %s pid %d thread %s pid %d\n",
- task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ amdgpu_vm_print_task_info(adev, task_info);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
index 72211409227b..f15d691e9a20 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
@@ -134,10 +134,7 @@ static int gmc_v11_0_process_interrupt(struct amdgpu_device *adev,
entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
if (task_info) {
- dev_err(adev->dev,
- " in process %s pid %d thread %s pid %d)\n",
- task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ amdgpu_vm_print_task_info(adev, task_info);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
index b645d3e6a6c8..de763105fdfd 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
@@ -127,10 +127,7 @@ static int gmc_v12_0_process_interrupt(struct amdgpu_device *adev,
entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
if (task_info) {
- dev_err(adev->dev,
- " in process %s pid %d thread %s pid %d)\n",
- task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ amdgpu_vm_print_task_info(adev, task_info);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
index 99ca08e9bdb5..b45fa0cea9d2 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
@@ -1458,9 +1458,7 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
if (task_info) {
- dev_err(adev->dev, " for process %s pid %d thread %s pid %d\n",
- task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ amdgpu_vm_print_task_info(adev, task_info);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
index 282197f4ffb1..78f65aea03f8 100644
--- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
@@ -641,10 +641,7 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
if (task_info) {
- dev_err(adev->dev,
- " for process %s pid %d thread %s pid %d)\n",
- task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ amdgpu_vm_print_task_info(adev, task_info);
amdgpu_vm_put_task_info(task_info);
}
--
2.49.0
* [PATCH v7 2/5] drm: Create a task info option for wedge events
2025-06-13 18:43 [PATCH v7 0/5] drm: Create a task info option for wedge events André Almeida
2025-06-13 18:43 ` [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info() André Almeida
@ 2025-06-13 18:43 ` André Almeida
2025-06-15 11:01 ` Raag Jadav
2025-06-13 18:43 ` [PATCH v7 3/5] drm/doc: Add a section about "Task information" for the wedge API André Almeida
` (2 subsequent siblings)
4 siblings, 1 reply; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
When a device gets wedged, it might be caused by a guilty application.
For userspace, knowing which task was involved can be useful in some
situations, like implementing a policy, logging, or giving the
compositor a chance to let the user know which task was involved in the
problem. The task info is an optional argument: when it is not
available, the PID and TASK strings won't appear in the event string.
Sometimes just the PID isn't enough, given that the task might already
be dead by the time userspace tries to look up that PID's name, so to
make life easier the task's name is also reported in the user event.
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> (for i915 and xe)
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Raag Jadav <raag.jadav@intel.com>
Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
v7:
- Change `char *comm` to `char comm[TASK_COMM_LEN]`
v6:
- s/cause/involved
- drop string initialization
v5:
- s/app/task for struct and commit message as well
- move defines to drm_drv.c
- validates if comm is not NULL and it's not empty
v4: s/APP/TASK
v3: Make comm_string and pid_string empty when there's no app info
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +-
drivers/gpu/drm/drm_drv.c | 20 ++++++++++++++++----
drivers/gpu/drm/i915/gt/intel_reset.c | 3 ++-
drivers/gpu/drm/xe/xe_device.c | 3 ++-
include/drm/drm_device.h | 8 ++++++++
include/drm/drm_drv.h | 3 ++-
7 files changed, 32 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index e1bab6a96cb6..8a0f36f33f13 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -6364,7 +6364,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
atomic_set(&adev->reset_domain->reset_res, r);
if (!r)
- drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE);
+ drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 3d887428ca2b..0c1381b527fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -164,7 +164,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
if (amdgpu_ring_sched_ready(ring))
drm_sched_start(&ring->sched, 0);
dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
- drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE);
+ drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
goto exit;
}
dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
index 56dd61f8e05a..eba99a081ec1 100644
--- a/drivers/gpu/drm/drm_drv.c
+++ b/drivers/gpu/drm/drm_drv.c
@@ -538,10 +538,15 @@ static const char *drm_get_wedge_recovery(unsigned int opt)
}
}
+#define WEDGE_STR_LEN 32
+#define PID_STR_LEN 15
+#define COMM_STR_LEN (TASK_COMM_LEN + 5)
+
/**
* drm_dev_wedged_event - generate a device wedged uevent
* @dev: DRM device
* @method: method(s) to be used for recovery
+ * @info: optional information about the guilty task
*
* This generates a device wedged uevent for the DRM device specified by @dev.
* Recovery @method\(s) of choice will be sent in the uevent environment as
@@ -554,13 +559,13 @@ static const char *drm_get_wedge_recovery(unsigned int opt)
*
* Returns: 0 on success, negative error code otherwise.
*/
-int drm_dev_wedged_event(struct drm_device *dev, unsigned long method)
+int drm_dev_wedged_event(struct drm_device *dev, unsigned long method,
+ struct drm_wedge_task_info *info)
{
const char *recovery = NULL;
unsigned int len, opt;
- /* Event string length up to 28+ characters with available methods */
- char event_string[32];
- char *envp[] = { event_string, NULL };
+ char event_string[WEDGE_STR_LEN], pid_string[PID_STR_LEN], comm_string[COMM_STR_LEN];
+ char *envp[] = { event_string, NULL, NULL, NULL };
len = scnprintf(event_string, sizeof(event_string), "%s", "WEDGED=");
@@ -582,6 +587,13 @@ int drm_dev_wedged_event(struct drm_device *dev, unsigned long method)
drm_info(dev, "device wedged, %s\n", method == DRM_WEDGE_RECOVERY_NONE ?
"but recovered through reset" : "needs recovery");
+ if (info && (info->comm[0] != '\0') && (info->pid >= 0)) {
+ snprintf(pid_string, sizeof(pid_string), "PID=%u", info->pid);
+ snprintf(comm_string, sizeof(comm_string), "TASK=%s", info->comm);
+ envp[1] = pid_string;
+ envp[2] = comm_string;
+ }
+
return kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp);
}
EXPORT_SYMBOL(drm_dev_wedged_event);
diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
index dbdcfe130ad4..ba1d8fdc3c7b 100644
--- a/drivers/gpu/drm/i915/gt/intel_reset.c
+++ b/drivers/gpu/drm/i915/gt/intel_reset.c
@@ -1448,7 +1448,8 @@ static void intel_gt_reset_global(struct intel_gt *gt,
kobject_uevent_env(kobj, KOBJ_CHANGE, reset_done_event);
else
drm_dev_wedged_event(&gt->i915->drm,
- DRM_WEDGE_RECOVERY_REBIND | DRM_WEDGE_RECOVERY_BUS_RESET);
+ DRM_WEDGE_RECOVERY_REBIND | DRM_WEDGE_RECOVERY_BUS_RESET,
+ NULL);
}
/**
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index c02c4c4e9412..f329613e061f 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -1168,7 +1168,8 @@ void xe_device_declare_wedged(struct xe_device *xe)
/* Notify userspace of wedged device */
drm_dev_wedged_event(&xe->drm,
- DRM_WEDGE_RECOVERY_REBIND | DRM_WEDGE_RECOVERY_BUS_RESET);
+ DRM_WEDGE_RECOVERY_REBIND | DRM_WEDGE_RECOVERY_BUS_RESET,
+ NULL);
}
for_each_gt(gt, xe, id)
diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
index e2f894f1b90a..729e1c6da138 100644
--- a/include/drm/drm_device.h
+++ b/include/drm/drm_device.h
@@ -30,6 +30,14 @@ struct pci_controller;
#define DRM_WEDGE_RECOVERY_REBIND BIT(1) /* unbind + bind driver */
#define DRM_WEDGE_RECOVERY_BUS_RESET BIT(2) /* unbind + reset bus device + bind */
+/**
+ * struct drm_wedge_task_info - information about the guilty task of a wedge dev
+ */
+struct drm_wedge_task_info {
+ pid_t pid;
+ char comm[TASK_COMM_LEN];
+};
+
/**
* enum switch_power_state - power state of drm device
*/
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index 63b51942d606..3f76a32d6b84 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -487,7 +487,8 @@ void drm_put_dev(struct drm_device *dev);
bool drm_dev_enter(struct drm_device *dev, int *idx);
void drm_dev_exit(int idx);
void drm_dev_unplug(struct drm_device *dev);
-int drm_dev_wedged_event(struct drm_device *dev, unsigned long method);
+int drm_dev_wedged_event(struct drm_device *dev, unsigned long method,
+ struct drm_wedge_task_info *info);
/**
* drm_dev_is_unplugged - is a DRM device unplugged
--
2.49.0
* [PATCH v7 3/5] drm/doc: Add a section about "Task information" for the wedge API
2025-06-13 18:43 [PATCH v7 0/5] drm: Create a task info option for wedge events André Almeida
2025-06-13 18:43 ` [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info() André Almeida
2025-06-13 18:43 ` [PATCH v7 2/5] drm: Create a task info option for wedge events André Almeida
@ 2025-06-13 18:43 ` André Almeida
2025-06-13 18:43 ` [PATCH v7 4/5] drm: amdgpu: Use struct drm_wedge_task_info inside of struct amdgpu_task_info André Almeida
2025-06-13 18:43 ` [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info André Almeida
4 siblings, 0 replies; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
Add a section about "Task information" for the wedge API.
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Raag Jadav <raag.jadav@intel.com>
Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
v5:
- Change app to task in the text as well
v4:
- Change APP to TASK
v3:
- Change "app that caused ..." to "app involved ..."
- Clarify that devcoredump has more information about what happened
- Update that PID and APP will be empty if there's no app info
---
Documentation/gpu/drm-uapi.rst | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/Documentation/gpu/drm-uapi.rst b/Documentation/gpu/drm-uapi.rst
index 4863a4deb0ee..263e5a97c080 100644
--- a/Documentation/gpu/drm-uapi.rst
+++ b/Documentation/gpu/drm-uapi.rst
@@ -446,6 +446,23 @@ telemetry information (devcoredump, syslog). This is useful because the first
hang is usually the most critical one which can result in consequential hangs or
complete wedging.
+Task information
+----------------
+
+The information about which application (if any) was involved in the device
+wedging is useful for userspace if it wants to notify the user about what
+happened (e.g. the compositor displays a message to the user "The <task name>
+caused a graphical error and the system recovered") or to implement policies
+(e.g. a daemon may "ban" a task that keeps resetting the device). If the task
+information is available, the uevent will contain ``PID=<pid>`` and
+``TASK=<task name>``. Otherwise, ``PID`` and ``TASK`` will not appear in the
+event string.
+
+The reliability of this information is driver and hardware specific, and it
+should be taken with caution regarding its precision. To get the full picture
+of what really happened, the devcoredump file should have much more detailed
+information about the device state and about the event.
+
Consumer prerequisites
----------------------
--
2.49.0
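As a rough illustration of the policy use case described in the documentation
above (banning a task that keeps resetting the device), a daemon could track
how often the same PID shows up in consecutive wedge events. The threshold and
the kill action below are assumptions for the sake of the example, not
something this series prescribes:

/* Illustrative policy sketch only: stop a task that keeps wedging the device. */
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>

#define MAX_RESETS_IN_A_ROW	3

static pid_t last_pid = -1;
static int repeat_count;

/* Called with the value of the PID= key from the wedge uevent, or NULL. */
static void handle_wedge_event(const char *pid_str)
{
	pid_t pid = pid_str ? (pid_t)atoi(pid_str) : -1;

	if (pid > 0 && pid == last_pid)
		repeat_count++;
	else
		repeat_count = 1;

	last_pid = pid;

	/*
	 * The TASK= name could additionally be checked here, e.g. so a
	 * compositor switches to software rendering instead of killing
	 * itself, as suggested in the cover letter.
	 */
	if (pid > 0 && repeat_count >= MAX_RESETS_IN_A_ROW)
		kill(pid, SIGKILL);
}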
* [PATCH v7 4/5] drm: amdgpu: Use struct drm_wedge_task_info inside of struct amdgpu_task_info
2025-06-13 18:43 [PATCH v7 0/5] drm: Create a task info option for wedge events André Almeida
` (2 preceding siblings ...)
2025-06-13 18:43 ` [PATCH v7 3/5] drm/doc: Add a section about "Task information" for the wedge API André Almeida
@ 2025-06-13 18:43 ` André Almeida
2025-06-16 7:05 ` Christian König
2025-06-13 18:43 ` [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info André Almeida
4 siblings, 1 reply; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
To avoid a cast when calling drm_dev_wedged_event(), replace pid and
task name inside of struct amdgpu_task_info with struct
drm_wedge_task_info.
Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
v7: New patch
---
drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c | 4 ++--
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 12 ++++++------
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +--
drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 2 +-
drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_events.c | 2 +-
drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c | 8 ++++----
9 files changed, 18 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
index 8e626f50b362..dac4b926e7be 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
@@ -1786,7 +1786,7 @@ static int amdgpu_debugfs_vm_info_show(struct seq_file *m, void *unused)
ti = amdgpu_vm_get_task_info_vm(vm);
if (ti) {
- seq_printf(m, "pid:%d\tProcess:%s ----------\n", ti->pid, ti->process_name);
+ seq_printf(m, "pid:%d\tProcess:%s ----------\n", ti->task.pid, ti->process_name);
amdgpu_vm_put_task_info(ti);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
index 7b50741dc097..8a026bc9ea44 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
@@ -220,10 +220,10 @@ amdgpu_devcoredump_read(char *buffer, loff_t offset, size_t count,
drm_printf(&p, "time: %lld.%09ld\n", coredump->reset_time.tv_sec,
coredump->reset_time.tv_nsec);
- if (coredump->reset_task_info.pid)
+ if (coredump->reset_task_info.task.pid)
drm_printf(&p, "process_name: %s PID: %d\n",
coredump->reset_task_info.process_name,
- coredump->reset_task_info.pid);
+ coredump->reset_task_info.task.pid);
/* SOC Information */
drm_printf(&p, "\nSOC Information\n");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 0ecc88df7208..e5e33a68d935 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -329,7 +329,7 @@ static int amdgpu_gem_object_open(struct drm_gem_object *obj,
dev_warn(adev->dev, "validate_and_fence failed: %d\n", r);
if (ti) {
- dev_warn(adev->dev, "pid %d\n", ti->pid);
+ dev_warn(adev->dev, "pid %d\n", ti->task.pid);
amdgpu_vm_put_task_info(ti);
}
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index f2a0132521c2..0efd3fc7cf3e 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -622,7 +622,7 @@ int amdgpu_vm_validate(struct amdgpu_device *adev, struct amdgpu_vm *vm,
pr_warn_ratelimited("Evicted user BO is not reserved\n");
if (ti) {
- pr_warn_ratelimited("pid %d\n", ti->pid);
+ pr_warn_ratelimited("pid %d\n", ti->task.pid);
amdgpu_vm_put_task_info(ti);
}
@@ -2507,11 +2507,11 @@ void amdgpu_vm_set_task_info(struct amdgpu_vm *vm)
if (!vm->task_info)
return;
- if (vm->task_info->pid == current->pid)
+ if (vm->task_info->task.pid == current->pid)
return;
- vm->task_info->pid = current->pid;
- get_task_comm(vm->task_info->task_name, current);
+ vm->task_info->task.pid = current->pid;
+ get_task_comm(vm->task_info->task.comm, current);
if (current->group_leader->mm != current->mm)
return;
@@ -2774,7 +2774,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
dev_warn(adev->dev,
"VM memory stats for proc %s(%d) task %s(%d) is non-zero when fini\n",
- ti->process_name, ti->pid, ti->task_name, ti->tgid);
+ ti->process_name, ti->task.pid, ti->task.comm, ti->tgid);
}
amdgpu_vm_put_task_info(vm->task_info);
@@ -3163,5 +3163,5 @@ inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
dev_err(adev->dev,
" Process %s pid %d thread %s pid %d\n",
task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ task_info->task.comm, task_info->task.pid);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
index 3862a256b9b8..b5c3af1c5e99 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
@@ -236,9 +236,8 @@ struct amdgpu_vm_pte_funcs {
};
struct amdgpu_task_info {
+ struct drm_wedge_task_info task;
char process_name[TASK_COMM_LEN];
- char task_name[TASK_COMM_LEN];
- pid_t pid;
pid_t tgid;
struct kref refcount;
};
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
index 33ed2b158fcd..f38004e6064e 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
@@ -2187,7 +2187,7 @@ static int sdma_v4_0_print_iv_entry(struct amdgpu_device *adev,
dev_dbg_ratelimited(adev->dev,
" for process %s pid %d thread %s pid %d\n",
task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ task_info->task.comm, task_info->task.pid);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
index 9c169112a5e7..bcde34e4e0a1 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
@@ -1884,7 +1884,7 @@ static int sdma_v4_4_2_print_iv_entry(struct amdgpu_device *adev,
if (task_info) {
dev_dbg_ratelimited(adev->dev, " for process %s pid %d thread %s pid %d\n",
task_info->process_name, task_info->tgid,
- task_info->task_name, task_info->pid);
+ task_info->task.comm, task_info->task.pid);
amdgpu_vm_put_task_info(task_info);
}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
index 2b294ada3ec0..82905f3e54dd 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
@@ -1302,7 +1302,7 @@ void kfd_signal_reset_event(struct kfd_node *dev)
if (ti) {
dev_err(dev->adev->dev,
"Queues reset on process %s tid %d thread %s pid %d\n",
- ti->process_name, ti->tgid, ti->task_name, ti->pid);
+ ti->process_name, ti->tgid, ti->task.comm, ti->task.pid);
amdgpu_vm_put_task_info(ti);
}
}
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
index 83d9384ac815..a499449fcb06 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
@@ -253,9 +253,9 @@ void kfd_smi_event_update_vmfault(struct kfd_node *dev, uint16_t pasid)
task_info = amdgpu_vm_get_task_info_pasid(dev->adev, pasid);
if (task_info) {
/* Report VM faults from user applications, not retry from kernel */
- if (task_info->pid)
+ if (task_info->task.pid)
kfd_smi_event_add(0, dev, KFD_SMI_EVENT_VMFAULT, KFD_EVENT_FMT_VMFAULT(
- task_info->pid, task_info->task_name));
+ task_info->task.pid, task_info->task.comm));
amdgpu_vm_put_task_info(task_info);
}
}
@@ -359,8 +359,8 @@ void kfd_smi_event_process(struct kfd_process_device *pdd, bool start)
kfd_smi_event_add(0, pdd->dev,
start ? KFD_SMI_EVENT_PROCESS_START :
KFD_SMI_EVENT_PROCESS_END,
- KFD_EVENT_FMT_PROCESS(task_info->pid,
- task_info->task_name));
+ KFD_EVENT_FMT_PROCESS(task_info->task.pid,
+ task_info->task.comm));
amdgpu_vm_put_task_info(task_info);
}
}
--
2.49.0
* [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info
2025-06-13 18:43 [PATCH v7 0/5] drm: Create a task info option for wedge events André Almeida
` (3 preceding siblings ...)
2025-06-13 18:43 ` [PATCH v7 4/5] drm: amdgpu: Use struct drm_wedge_task_info inside of struct amdgpu_task_info André Almeida
@ 2025-06-13 18:43 ` André Almeida
2025-06-13 21:15 ` Alex Deucher
2025-06-16 7:10 ` Christian König
4 siblings, 2 replies; 11+ messages in thread
From: André Almeida @ 2025-06-13 18:43 UTC (permalink / raw)
To: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx,
André Almeida
To notify userspace about which task (if any) made the device get into
a wedged state, make use of the drm_wedge_task_info parameter, filling
it with the task PID and name.
Signed-off-by: André Almeida <andrealmeid@igalia.com>
---
v7:
- Remove struct cast, now we can use `info = &ti->task`
- Fix struct lifetime, move amdgpu_vm_put_task_info() after
drm_dev_wedged_event() call
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 +++++++++++++++--
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 ++++++--
2 files changed, 21 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 8a0f36f33f13..67cff53678e1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -6363,8 +6363,21 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
atomic_set(&adev->reset_domain->reset_res, r);
- if (!r)
- drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
+ if (!r) {
+ struct drm_wedge_task_info *info = NULL;
+ struct amdgpu_task_info *ti = NULL;
+
+ if (job) {
+ ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid);
+ if (ti)
+ info = &ti->task;
+ }
+
+ drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
+
+ if (ti)
+ amdgpu_vm_put_task_info(ti);
+ }
return r;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 0c1381b527fe..f061f691f556 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -89,6 +89,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
{
struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
struct amdgpu_job *job = to_amdgpu_job(s_job);
+ struct drm_wedge_task_info *info = NULL;
struct amdgpu_task_info *ti;
struct amdgpu_device *adev = ring->adev;
int idx;
@@ -125,7 +126,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
ti = amdgpu_vm_get_task_info_pasid(ring->adev, job->pasid);
if (ti) {
amdgpu_vm_print_task_info(adev, ti);
- amdgpu_vm_put_task_info(ti);
+ info = &ti->task;
}
/* attempt a per ring reset */
@@ -164,13 +165,16 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
if (amdgpu_ring_sched_ready(ring))
drm_sched_start(&ring->sched, 0);
dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
- drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
+ drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
goto exit;
}
dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
}
dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+ if (ti)
+ amdgpu_vm_put_task_info(ti);
+
if (amdgpu_device_should_recover_gpu(ring->adev)) {
struct amdgpu_reset_context reset_context;
memset(&reset_context, 0, sizeof(reset_context));
--
2.49.0
* Re: [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info
2025-06-13 18:43 ` [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info André Almeida
@ 2025-06-13 21:15 ` Alex Deucher
2025-06-16 7:10 ` Christian König
1 sibling, 0 replies; 11+ messages in thread
From: Alex Deucher @ 2025-06-13 21:15 UTC (permalink / raw)
To: André Almeida
Cc: Alex Deucher, Christian König, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas, dri-devel, linux-kernel, kernel-dev, amd-gfx,
intel-xe, intel-gfx
On Fri, Jun 13, 2025 at 2:44 PM André Almeida <andrealmeid@igalia.com> wrote:
>
> To notify userspace about which task (if any) made the device get in a
> wedge state, make use of drm_wedge_task_info parameter, filling it with
> the task PID and name.
>
> Signed-off-by: André Almeida <andrealmeid@igalia.com>
If you want the guilty state to be reliably correct for GC, you'll
need this patch:
https://lists.freedesktop.org/archives/amd-gfx/2025-June/125715.html
GC is pipelined, so the hardware will start working on subsequent jobs
before prior submissions are complete. This can lead to subsequent
jobs causing a hang which gets attributed to a prior job. With that
patch in place, the driver will force a fence wait between jobs from
different contexts to ensure they are serialized.
Alex
> ---
> v7:
> - Remove struct cast, now we can use `info = &ti->task`
> - Fix struct lifetime, move amdgpu_vm_put_task_info() after
> drm_dev_wedged_event() call
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 +++++++++++++++--
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 ++++++--
> 2 files changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 8a0f36f33f13..67cff53678e1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -6363,8 +6363,21 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
>
> atomic_set(&adev->reset_domain->reset_res, r);
>
> - if (!r)
> - drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
> + if (!r) {
> + struct drm_wedge_task_info *info = NULL;
> + struct amdgpu_task_info *ti = NULL;
> +
> + if (job) {
> + ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid);
> + if (ti)
> + info = &ti->task;
> + }
> +
> + drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
> +
> + if (ti)
> + amdgpu_vm_put_task_info(ti);
> + }
>
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 0c1381b527fe..f061f691f556 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -89,6 +89,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> {
> struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> struct amdgpu_job *job = to_amdgpu_job(s_job);
> + struct drm_wedge_task_info *info = NULL;
> struct amdgpu_task_info *ti;
> struct amdgpu_device *adev = ring->adev;
> int idx;
> @@ -125,7 +126,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> ti = amdgpu_vm_get_task_info_pasid(ring->adev, job->pasid);
> if (ti) {
> amdgpu_vm_print_task_info(adev, ti);
> - amdgpu_vm_put_task_info(ti);
> + info = &ti->task;
> }
>
> /* attempt a per ring reset */
> @@ -164,13 +165,16 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> if (amdgpu_ring_sched_ready(ring))
> drm_sched_start(&ring->sched, 0);
> dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
> - drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
> + drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
> goto exit;
> }
> dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
> }
> dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
>
> + if (ti)
> + amdgpu_vm_put_task_info(ti);
> +
> if (amdgpu_device_should_recover_gpu(ring->adev)) {
> struct amdgpu_reset_context reset_context;
> memset(&reset_context, 0, sizeof(reset_context));
> --
> 2.49.0
>
* Re: [PATCH v7 2/5] drm: Create a task info option for wedge events
2025-06-13 18:43 ` [PATCH v7 2/5] drm: Create a task info option for wedge events André Almeida
@ 2025-06-15 11:01 ` Raag Jadav
0 siblings, 0 replies; 11+ messages in thread
From: Raag Jadav @ 2025-06-15 11:01 UTC (permalink / raw)
To: André Almeida
Cc: Alex Deucher, Christian König, siqueira, airlied, simona,
rodrigo.vivi, jani.nikula, Xaver Hugl, Krzysztof Karas, dri-devel,
linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx
On Fri, Jun 13, 2025 at 03:43:45PM -0300, André Almeida wrote:
> When a device get wedged, it might be caused by a guilty application.
> For userspace, knowing which task was involved can be useful for some
> situations, like for implementing a policy, logs or for giving a chance
> for the compositor to let the user know what task was involved in the
> problem. This is an optional argument, when the task info is not
> available, the PID and TASK string won't appear in the event string.
>
> Sometimes just the PID isn't enough giving that the task might be already
> dead by the time userspace will try to check what was this PID's name,
> so to make the life easier also notify what's the task's name in the user
> event.
>
> Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com> (for i915 and xe)
> Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
> Reviewed-by: Raag Jadav <raag.jadav@intel.com>
Although I'm okay with this version, a few aesthetic nits below.
> Signed-off-by: André Almeida <andrealmeid@igalia.com>
...
> diff --git a/drivers/gpu/drm/drm_drv.c b/drivers/gpu/drm/drm_drv.c
> index 56dd61f8e05a..eba99a081ec1 100644
> --- a/drivers/gpu/drm/drm_drv.c
> +++ b/drivers/gpu/drm/drm_drv.c
> @@ -538,10 +538,15 @@ static const char *drm_get_wedge_recovery(unsigned int opt)
> }
> }
>
> +#define WEDGE_STR_LEN 32
> +#define PID_STR_LEN 15
> +#define COMM_STR_LEN (TASK_COMM_LEN + 5)
Align the values using tabs for readability, and since you're using
TASK_COMM_LEN here, please include sched.h instead of relying on an
intermediate header, which may not guarantee it for other archs and
randconfigs.
> +
> /**
> * drm_dev_wedged_event - generate a device wedged uevent
> * @dev: DRM device
> * @method: method(s) to be used for recovery
> + * @info: optional information about the guilty task
> *
> * This generates a device wedged uevent for the DRM device specified by @dev.
> * Recovery @method\(s) of choice will be sent in the uevent environment as
> @@ -554,13 +559,13 @@ static const char *drm_get_wedge_recovery(unsigned int opt)
> *
> * Returns: 0 on success, negative error code otherwise.
> */
> -int drm_dev_wedged_event(struct drm_device *dev, unsigned long method)
> +int drm_dev_wedged_event(struct drm_device *dev, unsigned long method,
> + struct drm_wedge_task_info *info)
> {
> const char *recovery = NULL;
> unsigned int len, opt;
> - /* Event string length up to 28+ characters with available methods */
> - char event_string[32];
> - char *envp[] = { event_string, NULL };
> + char event_string[WEDGE_STR_LEN], pid_string[PID_STR_LEN], comm_string[COMM_STR_LEN];
> + char *envp[] = { event_string, NULL, NULL, NULL };
Let's make this reverse xmas order and be consistent with other helpers
in this file.
> len = scnprintf(event_string, sizeof(event_string), "%s", "WEDGED=");
>
> @@ -582,6 +587,13 @@ int drm_dev_wedged_event(struct drm_device *dev, unsigned long method)
> drm_info(dev, "device wedged, %s\n", method == DRM_WEDGE_RECOVERY_NONE ?
> "but recovered through reset" : "needs recovery");
>
> + if (info && (info->comm[0] != '\0') && (info->pid >= 0)) {
> + snprintf(pid_string, sizeof(pid_string), "PID=%u", info->pid);
> + snprintf(comm_string, sizeof(comm_string), "TASK=%s", info->comm);
> + envp[1] = pid_string;
> + envp[2] = comm_string;
> + }
> +
> return kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp);
> }
> EXPORT_SYMBOL(drm_dev_wedged_event);
...
> diff --git a/include/drm/drm_device.h b/include/drm/drm_device.h
> index e2f894f1b90a..729e1c6da138 100644
> --- a/include/drm/drm_device.h
> +++ b/include/drm/drm_device.h
> @@ -30,6 +30,14 @@ struct pci_controller;
> #define DRM_WEDGE_RECOVERY_REBIND BIT(1) /* unbind + bind driver */
> #define DRM_WEDGE_RECOVERY_BUS_RESET BIT(2) /* unbind + reset bus device + bind */
>
> +/**
> + * struct drm_wedge_task_info - information about the guilty task of a wedge dev
> + */
> +struct drm_wedge_task_info {
> + pid_t pid;
> + char comm[TASK_COMM_LEN];
Ditto for sched.h.
Raag
> +};
> +
> /**
> * enum switch_power_state - power state of drm device
> */
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index 63b51942d606..3f76a32d6b84 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -487,7 +487,8 @@ void drm_put_dev(struct drm_device *dev);
> bool drm_dev_enter(struct drm_device *dev, int *idx);
> void drm_dev_exit(int idx);
> void drm_dev_unplug(struct drm_device *dev);
> -int drm_dev_wedged_event(struct drm_device *dev, unsigned long method);
> +int drm_dev_wedged_event(struct drm_device *dev, unsigned long method,
> + struct drm_wedge_task_info *info);
>
> /**
> * drm_dev_is_unplugged - is a DRM device unplugged
> --
> 2.49.0
>
* Re: [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info()
2025-06-13 18:43 ` [PATCH v7 1/5] drm: amdgpu: Create amdgpu_vm_print_task_info() André Almeida
@ 2025-06-16 7:03 ` Christian König
0 siblings, 0 replies; 11+ messages in thread
From: Christian König @ 2025-06-16 7:03 UTC (permalink / raw)
To: André Almeida, Alex Deucher, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx
On 6/13/25 20:43, André Almeida wrote:
> To avoid repetitive code in amdgpu, create a function that prints the
> content of struct amdgpu_task_info.
>
> Signed-off-by: André Almeida <andrealmeid@igalia.com>
> ---
> v7: new patch
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 4 +---
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 9 +++++++++
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +++
> drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c | 5 +----
> drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c | 5 +----
> drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c | 5 +----
> drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c | 4 +---
> drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c | 5 +----
> 8 files changed, 18 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 75262ce8db27..3d887428ca2b 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -124,9 +124,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>
> ti = amdgpu_vm_get_task_info_pasid(ring->adev, job->pasid);
> if (ti) {
> - dev_err(adev->dev,
> - "Process information: process %s pid %d thread %s pid %d\n",
> - ti->process_name, ti->tgid, ti->task_name, ti->pid);
> + amdgpu_vm_print_task_info(adev, ti);
> amdgpu_vm_put_task_info(ti);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index 3911c78f8282..f2a0132521c2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -3156,3 +3156,12 @@ bool amdgpu_vm_is_bo_always_valid(struct amdgpu_vm *vm, struct amdgpu_bo *bo)
> {
> return bo && bo->tbo.base.resv == vm->root.bo->tbo.base.resv;
> }
> +
> +inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
> + struct amdgpu_task_info *task_info)
Please drop the inline, neither performance critical nor really applicable.
Apart from that looks good to me.
Regards,
Christian.
> +{
> + dev_err(adev->dev,
> + " Process %s pid %d thread %s pid %d\n",
> + task_info->process_name, task_info->tgid,
> + task_info->task_name, task_info->pid);
> +}
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index f3ad687125ad..3862a256b9b8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -668,4 +668,7 @@ void amdgpu_vm_tlb_fence_create(struct amdgpu_device *adev,
> struct amdgpu_vm *vm,
> struct dma_fence **fence);
>
> +inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
> + struct amdgpu_task_info *task_info);
> +
> #endif
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> index a3e2787501f1..7923f491cf73 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c
> @@ -164,10 +164,7 @@ static int gmc_v10_0_process_interrupt(struct amdgpu_device *adev,
> entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
> task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
> if (task_info) {
> - dev_err(adev->dev,
> - " in process %s pid %d thread %s pid %d\n",
> - task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + amdgpu_vm_print_task_info(adev, task_info);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
> index 72211409227b..f15d691e9a20 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
> @@ -134,10 +134,7 @@ static int gmc_v11_0_process_interrupt(struct amdgpu_device *adev,
> entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
> task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
> if (task_info) {
> - dev_err(adev->dev,
> - " in process %s pid %d thread %s pid %d)\n",
> - task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + amdgpu_vm_print_task_info(adev, task_info);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
> index b645d3e6a6c8..de763105fdfd 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
> @@ -127,10 +127,7 @@ static int gmc_v12_0_process_interrupt(struct amdgpu_device *adev,
> entry->src_id, entry->ring_id, entry->vmid, entry->pasid);
> task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
> if (task_info) {
> - dev_err(adev->dev,
> - " in process %s pid %d thread %s pid %d)\n",
> - task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + amdgpu_vm_print_task_info(adev, task_info);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> index 99ca08e9bdb5..b45fa0cea9d2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c
> @@ -1458,9 +1458,7 @@ static int gmc_v8_0_process_interrupt(struct amdgpu_device *adev,
>
> task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
> if (task_info) {
> - dev_err(adev->dev, " for process %s pid %d thread %s pid %d\n",
> - task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + amdgpu_vm_print_task_info(adev, task_info);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> index 282197f4ffb1..78f65aea03f8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
> @@ -641,10 +641,7 @@ static int gmc_v9_0_process_interrupt(struct amdgpu_device *adev,
>
> task_info = amdgpu_vm_get_task_info_pasid(adev, entry->pasid);
> if (task_info) {
> - dev_err(adev->dev,
> - " for process %s pid %d thread %s pid %d)\n",
> - task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + amdgpu_vm_print_task_info(adev, task_info);
> amdgpu_vm_put_task_info(task_info);
> }
>
* Re: [PATCH v7 4/5] drm: amdgpu: Use struct drm_wedge_task_info inside of struct amdgpu_task_info
2025-06-13 18:43 ` [PATCH v7 4/5] drm: amdgpu: Use struct drm_wedge_task_info inside of struct amdgpu_task_info André Almeida
@ 2025-06-16 7:05 ` Christian König
0 siblings, 0 replies; 11+ messages in thread
From: Christian König @ 2025-06-16 7:05 UTC (permalink / raw)
To: André Almeida, Alex Deucher, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx
On 6/13/25 20:43, André Almeida wrote:
> To avoid a cast when calling drm_dev_wedged_event(), replace pid and
> task name inside of struct amdgpu_task_info with struct
> drm_wedge_task_info.
>
> Signed-off-by: André Almeida <andrealmeid@igalia.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
> ---
> v7: New patch
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c | 4 ++--
> drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 12 ++++++------
> drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h | 3 +--
> drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 2 +-
> drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c | 2 +-
> drivers/gpu/drm/amd/amdkfd/kfd_events.c | 2 +-
> drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c | 8 ++++----
> 9 files changed, 18 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> index 8e626f50b362..dac4b926e7be 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c
> @@ -1786,7 +1786,7 @@ static int amdgpu_debugfs_vm_info_show(struct seq_file *m, void *unused)
>
> ti = amdgpu_vm_get_task_info_vm(vm);
> if (ti) {
> - seq_printf(m, "pid:%d\tProcess:%s ----------\n", ti->pid, ti->process_name);
> + seq_printf(m, "pid:%d\tProcess:%s ----------\n", ti->task.pid, ti->process_name);
> amdgpu_vm_put_task_info(ti);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
> index 7b50741dc097..8a026bc9ea44 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
> @@ -220,10 +220,10 @@ amdgpu_devcoredump_read(char *buffer, loff_t offset, size_t count,
> drm_printf(&p, "time: %lld.%09ld\n", coredump->reset_time.tv_sec,
> coredump->reset_time.tv_nsec);
>
> - if (coredump->reset_task_info.pid)
> + if (coredump->reset_task_info.task.pid)
> drm_printf(&p, "process_name: %s PID: %d\n",
> coredump->reset_task_info.process_name,
> - coredump->reset_task_info.pid);
> + coredump->reset_task_info.task.pid);
>
> /* SOC Information */
> drm_printf(&p, "\nSOC Information\n");
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index 0ecc88df7208..e5e33a68d935 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -329,7 +329,7 @@ static int amdgpu_gem_object_open(struct drm_gem_object *obj,
>
> dev_warn(adev->dev, "validate_and_fence failed: %d\n", r);
> if (ti) {
> - dev_warn(adev->dev, "pid %d\n", ti->pid);
> + dev_warn(adev->dev, "pid %d\n", ti->task.pid);
> amdgpu_vm_put_task_info(ti);
> }
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index f2a0132521c2..0efd3fc7cf3e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -622,7 +622,7 @@ int amdgpu_vm_validate(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>
> pr_warn_ratelimited("Evicted user BO is not reserved\n");
> if (ti) {
> - pr_warn_ratelimited("pid %d\n", ti->pid);
> + pr_warn_ratelimited("pid %d\n", ti->task.pid);
> amdgpu_vm_put_task_info(ti);
> }
>
> @@ -2507,11 +2507,11 @@ void amdgpu_vm_set_task_info(struct amdgpu_vm *vm)
> if (!vm->task_info)
> return;
>
> - if (vm->task_info->pid == current->pid)
> + if (vm->task_info->task.pid == current->pid)
> return;
>
> - vm->task_info->pid = current->pid;
> - get_task_comm(vm->task_info->task_name, current);
> + vm->task_info->task.pid = current->pid;
> + get_task_comm(vm->task_info->task.comm, current);
>
> if (current->group_leader->mm != current->mm)
> return;
> @@ -2774,7 +2774,7 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>
> dev_warn(adev->dev,
> "VM memory stats for proc %s(%d) task %s(%d) is non-zero when fini\n",
> - ti->process_name, ti->pid, ti->task_name, ti->tgid);
> + ti->process_name, ti->task.pid, ti->task.comm, ti->tgid);
> }
>
> amdgpu_vm_put_task_info(vm->task_info);
> @@ -3163,5 +3163,5 @@ inline void amdgpu_vm_print_task_info(struct amdgpu_device *adev,
> dev_err(adev->dev,
> " Process %s pid %d thread %s pid %d\n",
> task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + task_info->task.comm, task_info->task.pid);
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index 3862a256b9b8..b5c3af1c5e99 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -236,9 +236,8 @@ struct amdgpu_vm_pte_funcs {
> };
>
> struct amdgpu_task_info {
> + struct drm_wedge_task_info task;
> char process_name[TASK_COMM_LEN];
> - char task_name[TASK_COMM_LEN];
> - pid_t pid;
> pid_t tgid;
> struct kref refcount;
> };
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> index 33ed2b158fcd..f38004e6064e 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
> @@ -2187,7 +2187,7 @@ static int sdma_v4_0_print_iv_entry(struct amdgpu_device *adev,
> dev_dbg_ratelimited(adev->dev,
> " for process %s pid %d thread %s pid %d\n",
> task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + task_info->task.comm, task_info->task.pid);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> index 9c169112a5e7..bcde34e4e0a1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
> @@ -1884,7 +1884,7 @@ static int sdma_v4_4_2_print_iv_entry(struct amdgpu_device *adev,
> if (task_info) {
> dev_dbg_ratelimited(adev->dev, " for process %s pid %d thread %s pid %d\n",
> task_info->process_name, task_info->tgid,
> - task_info->task_name, task_info->pid);
> + task_info->task.comm, task_info->task.pid);
> amdgpu_vm_put_task_info(task_info);
> }
>
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> index 2b294ada3ec0..82905f3e54dd 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
> @@ -1302,7 +1302,7 @@ void kfd_signal_reset_event(struct kfd_node *dev)
> if (ti) {
> dev_err(dev->adev->dev,
> "Queues reset on process %s tid %d thread %s pid %d\n",
> - ti->process_name, ti->tgid, ti->task_name, ti->pid);
> + ti->process_name, ti->tgid, ti->task.comm, ti->task.pid);
> amdgpu_vm_put_task_info(ti);
> }
> }
> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> index 83d9384ac815..a499449fcb06 100644
> --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> @@ -253,9 +253,9 @@ void kfd_smi_event_update_vmfault(struct kfd_node *dev, uint16_t pasid)
> task_info = amdgpu_vm_get_task_info_pasid(dev->adev, pasid);
> if (task_info) {
> /* Report VM faults from user applications, not retry from kernel */
> - if (task_info->pid)
> + if (task_info->task.pid)
> kfd_smi_event_add(0, dev, KFD_SMI_EVENT_VMFAULT, KFD_EVENT_FMT_VMFAULT(
> - task_info->pid, task_info->task_name));
> + task_info->task.pid, task_info->task.comm));
> amdgpu_vm_put_task_info(task_info);
> }
> }
> @@ -359,8 +359,8 @@ void kfd_smi_event_process(struct kfd_process_device *pdd, bool start)
> kfd_smi_event_add(0, pdd->dev,
> start ? KFD_SMI_EVENT_PROCESS_START :
> KFD_SMI_EVENT_PROCESS_END,
> - KFD_EVENT_FMT_PROCESS(task_info->pid,
> - task_info->task_name));
> + KFD_EVENT_FMT_PROCESS(task_info->task.pid,
> + task_info->task.comm));
> amdgpu_vm_put_task_info(task_info);
> }
> }
* Re: [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info
2025-06-13 18:43 ` [PATCH v7 5/5] drm/amdgpu: Make use of drm_wedge_task_info André Almeida
2025-06-13 21:15 ` Alex Deucher
@ 2025-06-16 7:10 ` Christian König
1 sibling, 0 replies; 11+ messages in thread
From: Christian König @ 2025-06-16 7:10 UTC (permalink / raw)
To: André Almeida, Alex Deucher, siqueira, airlied, simona,
Raag Jadav, rodrigo.vivi, jani.nikula, Xaver Hugl,
Krzysztof Karas
Cc: dri-devel, linux-kernel, kernel-dev, amd-gfx, intel-xe, intel-gfx
On 6/13/25 20:43, André Almeida wrote:
> To notify userspace about which task (if any) made the device get in a
> wedge state, make use of drm_wedge_task_info parameter, filling it with
> the task PID and name.
>
> Signed-off-by: André Almeida <andrealmeid@igalia.com>
> ---
> v7:
> - Remove struct cast, now we can use `info = &ti->task`
> - Fix struct lifetime, move amdgpu_vm_put_task_info() after
> drm_dev_wedged_event() call
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 17 +++++++++++++++--
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 ++++++--
> 2 files changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 8a0f36f33f13..67cff53678e1 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -6363,8 +6363,21 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
>
> atomic_set(&adev->reset_domain->reset_res, r);
>
> - if (!r)
> - drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
> + if (!r) {
> + struct drm_wedge_task_info *info = NULL;
> + struct amdgpu_task_info *ti = NULL;
> +
> + if (job) {
> + ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid);
> + if (ti)
> + info = &ti->task;
Drop the local variable and write that as ti ? &ti->task : NULL
> + }
> +
> + drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
here.
> +
> + if (ti)
> + amdgpu_vm_put_task_info(ti);
As a rule of thumb, *put* and *free* functions in the Linux kernel usually accept NULL as a parameter.
It would probably be better if we do that for amdgpu_vm_put_task_info() as well and drop the extra check.
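A minimal sketch of that suggestion (the early return is the only change, the existing kref-based release stays as it is):

/* Sketch only: let the put helper accept NULL, mirroring kfree()/kvfree() */
void amdgpu_vm_put_task_info(struct amdgpu_task_info *task_info)
{
	if (!task_info)
		return;

	/* ... existing kref_put() based release unchanged ... */
}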
Apart from that looks good to me.
Regards,
Christian.
> + }
>
> return r;
> }
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index 0c1381b527fe..f061f691f556 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -89,6 +89,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> {
> struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
> struct amdgpu_job *job = to_amdgpu_job(s_job);
> + struct drm_wedge_task_info *info = NULL;
> struct amdgpu_task_info *ti;
> struct amdgpu_device *adev = ring->adev;
> int idx;
> @@ -125,7 +126,7 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> ti = amdgpu_vm_get_task_info_pasid(ring->adev, job->pasid);
> if (ti) {
> amdgpu_vm_print_task_info(adev, ti);
> - amdgpu_vm_put_task_info(ti);
> + info = &ti->task;
> }
>
> /* attempt a per ring reset */
> @@ -164,13 +165,16 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> if (amdgpu_ring_sched_ready(ring))
> drm_sched_start(&ring->sched, 0);
> dev_err(adev->dev, "Ring %s reset succeeded\n", ring->sched.name);
> - drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, NULL);
> + drm_dev_wedged_event(adev_to_drm(adev), DRM_WEDGE_RECOVERY_NONE, info);
> goto exit;
> }
> dev_err(adev->dev, "Ring %s reset failure\n", ring->sched.name);
> }
> dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
>
> + if (ti)
> + amdgpu_vm_put_task_info(ti);
> +
> if (amdgpu_device_should_recover_gpu(ring->adev)) {
> struct amdgpu_reset_context reset_context;
> memset(&reset_context, 0, sizeof(reset_context));