Intel-XE Archive on lore.kernel.org
From: "Dong, Zhanjun" <zhanjun.dong@intel.com>
To: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>,
	<intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v2 1/1] drm/xe/gsc: Fix GSC proxy cleanup on early initialization failure
Date: Wed, 28 Jan 2026 16:52:11 -0500	[thread overview]
Message-ID: <571da6c4-93ab-404c-821a-c813650d65ea@intel.com> (raw)
In-Reply-To: <b6736da7-a430-43b5-bf48-d09a2af79af0@intel.com>



On 2026-01-27 6:32 p.m., Daniele Ceraolo Spurio wrote:
> 
> 
> On 1/26/2026 6:43 PM, Zhanjun Dong wrote:
>> xe_gsc_proxy_remove undoes what is done in both xe_gsc_proxy_init and
>> xe_gsc_proxy_start; however, if we fail between those two calls, it is
>> possible that the HW forcewake access hasn't been initialized yet, so
>> we hit errors when the cleanup code tries to write GSC registers. To
>> avoid that, split the cleanup into two functions so that the HW cleanup
>> is only called if the HW setup was completed successfully.
>>
>> Additionally, fix error handling in xe_gsc_proxy_start to properly
>> disable interrupts on failure paths before returning, ensuring cleanup
>> is performed correctly when xe_gsc_proxy_request_handler() or
>> xe_gsc_proxy_init_done() fails.
> 
> This second one is not a fix. In the current behavior, if 
> xe_gsc_proxy_start fails then xe_gsc_proxy_remove is called and the 
> cleanup is performed correctly. Since you're removing that part of the 
> cleanup from xe_gsc_proxy_remove then this new change to 
> xe_gsc_proxy_start becomes necessary.
Yes, to be updated in the next rev.
> 
>>
>> Fixes: ff6cd29b690b ("drm/xe: Cleanup unwind of gt initialization")
>> Signed-off-by: Zhanjun Dong <zhanjun.dong@intel.com>
>> Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>> ---
>> v2:
>> - Split cleanup into two functions: xe_gsc_proxy_remove() for SW cleanup
>>    and xe_gsc_proxy_stop() for HW cleanup that requires forcewake access.
>> - Add error handling in xe_gsc_proxy_start to disable interrupts on
>>    early error exits.
>> ---
>>   drivers/gpu/drm/xe/xe_gsc_proxy.c | 42 ++++++++++++++++++++-----------
>>   1 file changed, 28 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_gsc_proxy.c b/drivers/gpu/drm/xe/xe_gsc_proxy.c
>> index 42438b21f235..0f62ee7dab4a 100644
>> --- a/drivers/gpu/drm/xe/xe_gsc_proxy.c
>> +++ b/drivers/gpu/drm/xe/xe_gsc_proxy.c
>> @@ -444,16 +444,6 @@ static void xe_gsc_proxy_remove(void *arg)
>>       if (!gsc->proxy.component_added)
>>           return;
>> -    /* disable HECI2 IRQs */
>> -    scoped_guard(xe_pm_runtime, xe) {
>> -        CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>> -        if (!fw_ref.domains)
>> -            xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
>> -
>> -        /* try do disable irq even if forcewake failed */
>> -        gsc_proxy_irq_toggle(gsc, false);
>> -    }
>> -
>>       xe_gsc_wait_for_worker_completion(gsc);
> 
> in v1 you had xe_gsc_wait_for_worker_completion moved to 
> xe_gsc_proxy_stop(). I think that works better, since the worker 
> shouldn't be queued anymore after we disable interrupts.
Sure, will follow this in the next rev.

Regards,
Zhanjun Dong
> 
> Daniele
> 
>>       component_del(xe->drm.dev, &xe_gsc_proxy_component_ops);
>> @@ -502,6 +492,23 @@ int xe_gsc_proxy_init(struct xe_gsc *gsc)
>>       return devm_add_action_or_reset(xe->drm.dev, xe_gsc_proxy_remove, gsc);
>>   }
>> +static void xe_gsc_proxy_stop(void *arg)
>> +{
>> +    struct xe_gsc *gsc = arg;
>> +    struct xe_gt *gt = gsc_to_gt(gsc);
>> +    struct xe_device *xe = gt_to_xe(gt);
>> +
>> +    /* disable HECI2 IRQs */
>> +    scoped_guard(xe_pm_runtime, xe) {
>> +        CLASS(xe_force_wake, fw_ref)(gt_to_fw(gt), XE_FW_GSC);
>> +        if (!fw_ref.domains)
>> +            xe_gt_err(gt, "failed to get forcewake to disable GSC interrupts\n");
>> +
>> +        /* try to disable the irq even if forcewake failed */
>> +        gsc_proxy_irq_toggle(gsc, false);
>> +    }
>> +}
>> +
>>   /**
>>    * xe_gsc_proxy_start() - start the proxy by submitting the first request
>>    * @gsc: the GSC uC
>> @@ -510,6 +517,8 @@ int xe_gsc_proxy_init(struct xe_gsc *gsc)
>>    */
>>   int xe_gsc_proxy_start(struct xe_gsc *gsc)
>>   {
>> +    struct xe_gt *gt = gsc_to_gt(gsc);
>> +    struct xe_device *xe = gt_to_xe(gt);
>>       int err;
>>       /* enable the proxy interrupt in the GSC shim layer */
>> @@ -521,12 +530,17 @@ int xe_gsc_proxy_start(struct xe_gsc *gsc)
>>        */
>>       err = xe_gsc_proxy_request_handler(gsc);
>>       if (err)
>> -        return err;
>> +        goto err_irq_disable;
>>       if (!xe_gsc_proxy_init_done(gsc)) {
>> -        xe_gt_err(gsc_to_gt(gsc), "GSC FW reports proxy init not completed\n");
>> -        return -EIO;
>> +        xe_gt_err(gt, "GSC FW reports proxy init not completed\n");
>> +        err = -EIO;
>> +        goto err_irq_disable;
>>       }
>> -    return 0;
>> +    return devm_add_action_or_reset(xe->drm.dev, xe_gsc_proxy_stop, gsc);
>> +
>> +err_irq_disable:
>> +    gsc_proxy_irq_toggle(gsc, false);
>> +    return err;
>>   }
> 



Thread overview: 8+ messages
2026-01-27  2:43 [PATCH v2 1/1] drm/xe/gsc: Fix GSC proxy cleanup on early initialization failure Zhanjun Dong
2026-01-27  2:50 ` ✓ CI.KUnit: success for series starting with [v2,1/1] " Patchwork
2026-01-27  5:14 ` ✗ Xe.CI.Full: failure " Patchwork
2026-01-27 23:32 ` [PATCH v2 1/1] " Daniele Ceraolo Spurio
2026-01-28 21:52   ` Dong, Zhanjun [this message]
2026-01-28  8:05 ` ✗ Xe.CI.Full: failure for series starting with [v2,1/1] " Patchwork
2026-01-28 13:01 ` ✓ CI.KUnit: success for series starting with [v2,1/1] drm/xe/gsc: Fix GSC proxy cleanup on early initialization failure (rev2) Patchwork
2026-01-28 13:52 ` ✓ Xe.CI.BAT: " Patchwork
