public inbox for linux-kernel@vger.kernel.org
From: Lizhi Hou <lizhi.hou@amd.com>
To: Mario Limonciello <mario.limonciello@amd.com>,
	<ogabbay@kernel.org>, <quic_jhugo@quicinc.com>,
	<dri-devel@lists.freedesktop.org>,
	<maciej.falkowski@linux.intel.com>
Cc: <linux-kernel@vger.kernel.org>, <max.zhen@amd.com>,
	<sonal.santan@amd.com>
Subject: Re: [PATCH V1] accel/amdxdna: Move RPM resume into job run function
Date: Wed, 4 Feb 2026 13:15:10 -0800
Message-ID: <815ff6c8-c0a2-fe72-e159-2ff5f6124730@amd.com>
In-Reply-To: <49984935-fcf5-4b69-bef4-d514ef67366b@amd.com>

Applied to drm-misc-next-fixes

On 2/4/26 10:07, Mario Limonciello wrote:
> On 2/4/26 11:11 AM, Lizhi Hou wrote:
>> Currently, amdxdna_pm_resume_get() is called during job creation, and
>> amdxdna_pm_suspend_put() is called when the hardware notifies job
>> completion. If a job is canceled before it is run, no hardware
>> completion notification is generated, resulting in an unbalanced
>> runtime PM resume/suspend pair.
>>
>> Fix this by moving amdxdna_pm_resume_get() to the job run path, ensuring
>> runtime PM is only resumed for jobs that are actually executed.
>>
>> Fixes: 063db451832b ("accel/amdxdna: Enhance runtime power management")
>> Signed-off-by: Lizhi Hou <lizhi.hou@amd.com>
> Reviewed-by: Mario Limonciello (AMD) <superm1@kernel.org>
>> ---
>>   drivers/accel/amdxdna/aie2_ctx.c | 19 +++++++++----------
>>   1 file changed, 9 insertions(+), 10 deletions(-)
>>
>> diff --git a/drivers/accel/amdxdna/aie2_ctx.c b/drivers/accel/amdxdna/aie2_ctx.c
>> index fe8f9783a73c..37d05f2e986f 100644
>> --- a/drivers/accel/amdxdna/aie2_ctx.c
>> +++ b/drivers/accel/amdxdna/aie2_ctx.c
>> @@ -306,6 +306,10 @@ aie2_sched_job_run(struct drm_sched_job *sched_job)
>>  	kref_get(&job->refcnt);
>>  	fence = dma_fence_get(job->fence);
>>  
>> +	ret = amdxdna_pm_resume_get(hwctx->client->xdna);
>> +	if (ret)
>> +		goto out;
>> +
>>  	if (job->drv_cmd) {
>>  		switch (job->drv_cmd->opcode) {
>>  		case SYNC_DEBUG_BO:
>> @@ -332,6 +336,7 @@ aie2_sched_job_run(struct drm_sched_job *sched_job)
>>  
>>  out:
>>  	if (ret) {
>> +		amdxdna_pm_suspend_put(hwctx->client->xdna);
>>  		dma_fence_put(job->fence);
>>  		aie2_job_put(job);
>>  		mmput(job->mm);
>> @@ -988,15 +993,11 @@ int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
>>  		goto free_chain;
>>  	}
>>  
>> -	ret = amdxdna_pm_resume_get(xdna);
>> -	if (ret)
>> -		goto cleanup_job;
>> -
>>  retry:
>>  	ret = drm_gem_lock_reservations(job->bos, job->bo_cnt, &acquire_ctx);
>>  	if (ret) {
>>  		XDNA_WARN(xdna, "Failed to lock BOs, ret %d", ret);
>> -		goto suspend_put;
>> +		goto cleanup_job;
>>  	}
>>  
>>  	for (i = 0; i < job->bo_cnt; i++) {
>> @@ -1004,7 +1005,7 @@ int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
>>  		if (ret) {
>>  			XDNA_WARN(xdna, "Failed to reserve fences %d", ret);
>>  			drm_gem_unlock_reservations(job->bos, job->bo_cnt, &acquire_ctx);
>> -			goto suspend_put;
>> +			goto cleanup_job;
>>  		}
>>  	}
>>  
>> @@ -1019,12 +1020,12 @@ int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
>>  					msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
>>  			} else if (time_after(jiffies, timeout)) {
>>  				ret = -ETIME;
>> -				goto suspend_put;
>> +				goto cleanup_job;
>>  			}
>>  
>>  			ret = aie2_populate_range(abo);
>>  			if (ret)
>> -				goto suspend_put;
>> +				goto cleanup_job;
>>  			goto retry;
>>  		}
>>  	}
>>  
>> @@ -1050,8 +1051,6 @@ int aie2_cmd_submit(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job,
>>  
>>  	return 0;
>>  
>> -suspend_put:
>> -	amdxdna_pm_suspend_put(xdna);
>>  cleanup_job:
>>  	drm_sched_job_cleanup(&job->base);
>>  free_chain:
>

Thread overview: 3+ messages
2026-02-04 17:11 [PATCH V1] accel/amdxdna: Move RPM resume into job run function Lizhi Hou
2026-02-04 18:07 ` Mario Limonciello
2026-02-04 21:15   ` Lizhi Hou [this message]
