From: "Timur Kristóf" <timur.kristof@gmail.com>
To: "Christian König" <christian.koenig@amd.com>,
amd-gfx@lists.freedesktop.org,
"Alex Deucher" <alexander.deucher@amd.com>,
"Alexandre Demers" <alexandre.f.demers@gmail.com>,
"Rodrigo Siqueira" <siqueira@igalia.com>,
"Leo Liu" <Leo.Liu@amd.com>
Subject: Re: [PATCH 04/13] drm/amdgpu/vce: Clear VCPU BO before copying firmware to it (v2)
Date: Fri, 07 Nov 2025 10:39:24 +0100
Message-ID: <f4854aa398e929cf3d7186e4d32da0d0da3a7e79.camel@gmail.com>
In-Reply-To: <0512a9b1-1ab0-407e-91c3-f496a55dcea8@amd.com>
On Fri, 2025-11-07 at 10:25 +0100, Christian König wrote:
> On 11/6/25 19:44, Timur Kristóf wrote:
> > The VCPU BO contains not only the VCE firmware but also other
> > ranges that the VCE uses for its stack and data. Let's initialize
> > it to zero to avoid having garbage in the VCPU BO.
> >
> > v2:
> > - Only clear BO after creation, not on resume.
> >
> > Fixes: d38ceaf99ed0 ("drm/amdgpu: add core driver (v4)")
> > Signed-off-by: Timur Kristóf <timur.kristof@gmail.com>
>
> For now this patch is Reviewed-by: Christian König
> <christian.koenig@amd.com> since it addresses a clear problem and
> potentially even needs to be back-ported to older kernels.
Thank you, I agree.
>
> But I think we should clean that up more fully after the VCE1
> support has landed.
Yes, I'm happy to continue this work after the VCE 1 support lands.
>
> Assuming it holds true that VCE1-3 can't continue sessions
> after suspend/resume, we should do something like this:
>
> 1. Remove all amdgpu_bo_kmap(adev->vce.vcpu_bo, &cpu_addr) calls.
>    As a kernel BO, the VCE FW BO is pinned and mapped at creation
>    time.
This is already done by patch 6 of this series:
"Save/restore and pin VCPU BO for all VCE (v2)"
>
> 2. Rename amdgpu_vce_resume() to amdgpu_vce_reload_fw() and add the
>    memset_io() there, like you originally planned.
Also done by patch 6 of this series, except for the rename.
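To illustrate, a rough sketch of what the renamed helper could look like
(names and details are illustrative, not the final code):

	/* Illustrative only: wipe the whole VCPU BO, then copy the
	 * firmware back in, skipping the common firmware header.
	 */
	static int amdgpu_vce_reload_fw(struct amdgpu_device *adev)
	{
		const struct common_firmware_header *hdr;
		unsigned int offset;

		hdr = (const struct common_firmware_header *)adev->vce.fw->data;
		offset = le32_to_cpu(hdr->ucode_array_offset_bytes);

		memset_io(adev->vce.cpu_addr, 0, amdgpu_bo_size(adev->vce.vcpu_bo));
		memcpy_toio(adev->vce.cpu_addr, adev->vce.fw->data + offset,
			    adev->vce.fw->size - offset);

		return 0;
	}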
>
> 3. Also reset the VCE FW handles in amdgpu_vce_reload_fw().
>
> E.g. something like this:
> 	for (i = 0; i < AMDGPU_MAX_VCE_HANDLES; ++i) {
> 		atomic_set(&adev->vce.handles[i], 0);
> 		adev->vce.filp[i] = NULL;
> 	}
>
> This way the kernel will reject submissions when userspace tries
> to use the same FW handles as before the suspend/resume, preventing
> the HW from crashing.
>
> Does that sound like a plan to you?
Yes, that sounds like a good plan to me.
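To spell out why step 3 helps: after the handle table is cleared on reload,
a session handle created before suspend no longer matches any slot, so the
command-stream parser rejects the submission before it reaches the freshly
reloaded firmware. Roughly (simplified, not the actual parser code):

	/* Simplified illustration of the handle lookup in the CS parser;
	 * "handle" and "p->filp" stand for the submitted session handle
	 * and the submitting file.
	 */
	for (i = 0; i < AMDGPU_MAX_VCE_HANDLES; ++i) {
		if (atomic_read(&adev->vce.handles[i]) == handle &&
		    adev->vce.filp[i] == p->filp)
			return i;	/* known session, accept */
	}
	return -EINVAL;			/* stale or unknown handle, reject */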
>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 2 ++
> > 1 file changed, 2 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> > index b9060bcd4806..e028ad0d3b7a 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
> > @@ -187,6 +187,8 @@ int amdgpu_vce_sw_init(struct amdgpu_device *adev, unsigned long size)
> >  		return r;
> >  	}
> >  
> > +	memset_io(adev->vce.cpu_addr, 0, size);
> > +
> >  	for (i = 0; i < AMDGPU_MAX_VCE_HANDLES; ++i) {
> >  		atomic_set(&adev->vce.handles[i], 0);
> >  		adev->vce.filp[i] = NULL;