From: "Khatri, Sunil" <sunil.khatri@amd.com>
To: Trigger.Huang@amd.com, amd-gfx@lists.freedesktop.org
Cc: alexander.deucher@amd.com
Subject: Re: [PATCH v4 2/2] drm/amdgpu: Do core dump immediately when job tmo
Date: Wed, 21 Aug 2024 15:32:06 +0530 [thread overview]
Message-ID: <16208ed2-e049-9fe3-74ef-81048b4d0ea1@amd.com> (raw)
In-Reply-To: <20240821083841.477392-3-Trigger.Huang@amd.com>
Acked-by: Sunil Khatri <sunil.khatri@amd.com>
On 8/21/2024 2:08 PM, Trigger.Huang@amd.com wrote:
> From: Trigger Huang <Trigger.Huang@amd.com>
>
> Do the coredump immediately after a job timeout to get a closer
> representation of the GPU's error status.
>
> V2: Skip printing vram_lost, as the GPU reset has not happened
> yet (Alex)
>
> V3: Unconditionally call the core dump, as we care about all the
> reset paths (soft recovery, queue reset, and full adapter reset) (Alex)
>
> V4: Do the dump after adev->job_hang = true (Sunil)
>
> Signed-off-by: Trigger Huang <Trigger.Huang@amd.com>
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 68 ++++++++++++++++++++++++-
> 1 file changed, 67 insertions(+), 1 deletion(-)
>
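
Before the diff, my reading of the resulting timeout flow in v4, since the
ordering is the whole point of this patch (a condensed sketch only, not the
literal code; see the hunks below):

    amdgpu_job_timedout()
        adev->job_hang = true;
        amdgpu_job_core_dump(adev, job);   /* new: dump first, per device/hive */
        /* soft recovery and queue reset are still attempted as before */
        ...
        set_bit(AMDGPU_SKIP_COREDUMP, &reset_context.flags); /* no second dump */
        r = amdgpu_device_gpu_recover(ring->adev, job, &reset_context);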
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> index c6a1783fc9ef..3000a49b3e5c 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
> @@ -30,6 +30,61 @@
> #include "amdgpu.h"
> #include "amdgpu_trace.h"
> #include "amdgpu_reset.h"
> +#include "amdgpu_dev_coredump.h"
> +#include "amdgpu_xgmi.h"
> +
> +static void amdgpu_job_do_core_dump(struct amdgpu_device *adev,
> +                                    struct amdgpu_job *job)
> +{
> +        int i;
> +
> +        dev_info(adev->dev, "Dumping IP State\n");
> +        for (i = 0; i < adev->num_ip_blocks; i++) {
> +                if (adev->ip_blocks[i].version->funcs->dump_ip_state)
> +                        adev->ip_blocks[i].version->funcs
> +                                ->dump_ip_state((void *)adev);
> +        }
> +        dev_info(adev->dev, "Dumping IP State Completed\n");
> +
> +        amdgpu_coredump(adev, true, false, job);
> +}
> +
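
A side note for readers following the new helper: it drives the per-IP
dump_ip_state hooks. For reference, the callback pair is declared in struct
amd_ip_funcs (drivers/gpu/drm/amd/include/amd_shared.h); the excerpt below is
from memory and only illustrative, so double-check it against your tree:

    struct amd_ip_funcs {
            char *name;
            /* ... */
            /* capture the IP block's register state for later printing */
            void (*dump_ip_state)(void *handle);
            /* write the captured state into the devcoredump buffer */
            void (*print_ip_state)(void *handle, struct drm_printer *p);
    };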
> +static void amdgpu_job_core_dump(struct amdgpu_device *adev,
> +                                 struct amdgpu_job *job)
> +{
> +        struct list_head device_list, *device_list_handle = NULL;
> +        struct amdgpu_device *tmp_adev = NULL;
> +        struct amdgpu_hive_info *hive = NULL;
> +
> +        if (!amdgpu_sriov_vf(adev))
> +                hive = amdgpu_get_xgmi_hive(adev);
> +        if (hive)
> +                mutex_lock(&hive->hive_lock);
> +        /*
> +         * Reuse the logic in amdgpu_device_gpu_recover() to build the
> +         * list of devices for the coredump
> +         */
> +        INIT_LIST_HEAD(&device_list);
> +        if (!amdgpu_sriov_vf(adev) && (adev->gmc.xgmi.num_physical_nodes > 1) && hive) {
> +                list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head)
> +                        list_add_tail(&tmp_adev->reset_list, &device_list);
> +                if (!list_is_first(&adev->reset_list, &device_list))
> +                        list_rotate_to_front(&adev->reset_list, &device_list);
> +                device_list_handle = &device_list;
> +        } else {
> +                list_add_tail(&adev->reset_list, &device_list);
> +                device_list_handle = &device_list;
> +        }
> +
> +        /* Do the coredump for each device */
> +        list_for_each_entry(tmp_adev, device_list_handle, reset_list)
> +                amdgpu_job_do_core_dump(tmp_adev, job);
> +
> +        if (hive) {
> +                mutex_unlock(&hive->hive_lock);
> +                amdgpu_put_xgmi_hive(hive);
> +        }
> +}
>
> static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
> {
> @@ -48,9 +103,14 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>                 return DRM_GPU_SCHED_STAT_ENODEV;
>         }
>
> -
>         adev->job_hang = true;
>
> +        /*
> +         * Do the coredump immediately after a job timeout to capture
> +         * the GPU's error status as closely as possible
> +         */
> +        amdgpu_job_core_dump(adev, job);
> +
>         if (amdgpu_gpu_recovery &&
>             amdgpu_ring_soft_recovery(ring, job->vmid, s_job->s_fence->parent)) {
>                 dev_err(adev->dev, "ring %s timeout, but soft recovered\n",
> @@ -101,6 +161,12 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
>                 reset_context.src = AMDGPU_RESET_SRC_JOB;
>                 clear_bit(AMDGPU_NEED_FULL_RESET, &reset_context.flags);
>
> +                /*
> +                 * Skip the coredump in the reset path: we have already
> +                 * captured the GPU's error status right after the timeout
> +                 */
> +                set_bit(AMDGPU_SKIP_COREDUMP, &reset_context.flags);
> +
>                 r = amdgpu_device_gpu_recover(ring->adev, job, &reset_context);
>                 if (r)
>                         dev_err(adev->dev, "GPU Recovery Failed: %d\n", r);
Thread overview: 5+ messages
2024-08-21 8:38 [PATCH v4 0/2] Improve the dev coredump for gfx job timeout scenario Trigger.Huang
2024-08-21 8:38 ` [PATCH v4 1/2] drm/amdgpu: skip printing vram_lost if needed Trigger.Huang
2024-08-21 8:38 ` [PATCH v4 2/2] drm/amdgpu: Do core dump immediately when job tmo Trigger.Huang
2024-08-21 10:02 ` Khatri, Sunil [this message]
2024-08-21 17:01 ` Deucher, Alexander