public inbox for intel-xe@lists.freedesktop.org
From: "Tauro, Riana" <riana.tauro@intel.com>
To: Anshuman Gupta <anshuman.gupta@intel.com>
Cc: <intel-xe@lists.freedesktop.org>, <rodrigo.vivi@intel.com>,
	<aravind.iddamsetty@linux.intel.com>, <badal.nilawar@intel.com>,
	<raag.jadav@intel.com>, <ravi.kishore.koppuravuri@intel.com>,
	<mallesh.koujalagi@intel.com>, <soham.purkait@intel.com>,
	Matthew Brost <matthew.brost@intel.com>,
	Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
Subject: Re: [PATCH v4 04/13] drm/xe: Skip device access during PCI error recovery
Date: Wed, 6 May 2026 21:11:12 +0530	[thread overview]
Message-ID: <50cfbb65-2546-4106-92d5-160b87d3317e@intel.com> (raw)
In-Reply-To: <vtkxftaqr2aaiv7pjo5nxiv6f3eataby3qfpildmz3jko4fhoh@oxuppfnttdzv>


On 4/30/2026 6:28 PM, Anshuman Gupta wrote:
> On 2026-04-17 at 14:28:16 +0530, Riana Tauro wrote:
>> When a fatal error occurs and the error_detected callback is
>> invoked, the device is inaccessible. The error_detected callback
>> wedges the device, causing jobs to time out.
>>
>> The timedout handler acquires forcewake to dump devcoredump and
>> triggers a GT reset. Since the device is inaccessible, this causes
>> errors. Skip all MMIO accesses and GT reset when the device
>> is in recovery.
>>
>> Cc: Matthew Brost <matthew.brost@intel.com>
>> Cc: Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
>> Signed-off-by: Riana Tauro <riana.tauro@intel.com>
>> ---
>> v2: add check in worker (Mallesh)
>> ---
>>   drivers/gpu/drm/xe/xe_gt.c         | 14 +++++++++++---
>>   drivers/gpu/drm/xe/xe_guc_submit.c |  9 +++++----
>>   2 files changed, 16 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
>> index 8a31c963c372..5ea5524d83af 100644
>> --- a/drivers/gpu/drm/xe/xe_gt.c
>> +++ b/drivers/gpu/drm/xe/xe_gt.c
>> @@ -917,6 +917,9 @@ static void gt_reset_worker(struct work_struct *w)
>>   	if (xe_device_wedged(gt_to_xe(gt)))
>>   		goto err_pm_put;
>>   
>> +	if (xe_device_is_in_recovery(gt_to_xe(gt)))
>> +		goto err_pm_put;
>> +
>>   	/* We only support GT resets with GuC submission */
>>   	if (!xe_device_uc_enabled(gt_to_xe(gt)))
>>   		goto err_pm_put;
>> @@ -977,18 +980,23 @@ static void gt_reset_worker(struct work_struct *w)
>>   
>>   void xe_gt_reset_async(struct xe_gt *gt)
>>   {
>> -	xe_gt_info(gt, "trying reset from %ps\n", __builtin_return_address(0));
>> +	struct xe_device *xe = gt_to_xe(gt);
>> +
>> +	if (xe_device_is_in_recovery(xe))
>> +		return;
> How is this synchronized with xe_device_set_in_recovery()? A mid-flight
> reset can still hit the device if it has already passed the
> xe_device_is_in_recovery() check.


There might be a race when a GT reset is already in progress and an AER
error arrives. Since this is an error path, some -ECANCELED errors should
be acceptable. We currently see similar behaviour if the device gets
wedged while a GT reset is in progress.

A better approach would be to cancel all workers, which will be done in
[PATCH v6 0/8] Introduce Xe PCIe FLR - Raag Jadav
<https://lore.kernel.org/intel-xe/20260423100017.1051587-1-raag.jadav@intel.com/>.
That series will be integrated with the existing PCI error handlers once
it is merged. Will add a TODO in the cover letter and the first patch.

A similar TODO exists in the FLR series "TODO: Add PCIe error handling 
callbacks using similar flow."

Thanks
Riana

>
> Thanks,
> Anshuman
>>   
>>   	/* Don't do a reset while one is already in flight */
>>   	if (!xe_fault_inject_gt_reset() && xe_uc_reset_prepare(&gt->uc))
>>   		return;
>>   
>> +	xe_gt_info(gt, "trying reset from %ps\n", __builtin_return_address(0));
>> +
>>   	xe_gt_info(gt, "reset queued\n");
>>   
>>   	/* Pair with put in gt_reset_worker() if work is enqueued */
>> -	xe_pm_runtime_get_noresume(gt_to_xe(gt));
>> +	xe_pm_runtime_get_noresume(xe);
>>   	if (!queue_work(gt->ordered_wq, &gt->reset.worker))
>> -		xe_pm_runtime_put(gt_to_xe(gt));
>> +		xe_pm_runtime_put(xe);
>>   }
>>   
>>   void xe_gt_suspend_prepare(struct xe_gt *gt)
>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>> index 10556156eaad..1f32fb14a5c1 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>> @@ -1522,7 +1522,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>>   	 * If devcoredump not captured and GuC capture for the job is not ready
>>   	 * do manual capture first and decide later if we need to use it
>>   	 */
>> -	if (!exec_queue_killed(q) && !xe->devcoredump.captured &&
>> +	if (!xe_device_is_in_recovery(xe) && !exec_queue_killed(q) && !xe->devcoredump.captured &&
>>   	    !xe_guc_capture_get_matching_and_lock(q)) {
>>   		/* take force wake before engine register manual capture */
>>   		CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
>> @@ -1544,8 +1544,8 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>>   	set_exec_queue_banned(q);
>>   
>>   	/* Kick job / queue off hardware */
>> -	if (!wedged && (exec_queue_enabled(primary) ||
>> -			exec_queue_pending_disable(primary))) {
>> +	if (!xe_device_is_in_recovery(xe) && !wedged &&
>> +	    (exec_queue_enabled(primary) || exec_queue_pending_disable(primary))) {
>>   		int ret;
>>   
>>   		if (exec_queue_reset(primary))
>> @@ -1613,7 +1613,8 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
>>   
>>   	trace_xe_sched_job_timedout(job);
>>   
>> -	if (!exec_queue_killed(q))
>> +	/* Do not access device if in recovery */
>> +	if (!xe_device_is_in_recovery(xe) && !exec_queue_killed(q))
>>   		xe_devcoredump(q, job,
>>   			       "Timedout job - seqno=%u, lrc_seqno=%u, guc_id=%d, flags=0x%lx",
>>   			       xe_sched_job_seqno(job), xe_sched_job_lrc_seqno(job),
>> -- 
>> 2.47.1
>>


Thread overview: 45+ messages
2026-04-17  8:58 [PATCH v4 00/13] Introduce Xe Uncorrectable Error Handling Riana Tauro
2026-04-17  8:58 ` [PATCH v4 01/13] drm/xe/xe_survivability: Decouple survivability info from boot survivability Riana Tauro
2026-04-17  8:58 ` [PATCH v4 02/13] drm/xe/xe_pci_error: Implement PCI error recovery callbacks Riana Tauro
2026-04-27  6:35   ` Raag Jadav
2026-05-06 13:59     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 03/13] drm/xe/xe_pci_error: Group all devres to release them on PCIe slot reset Riana Tauro
2026-04-17  8:58 ` [PATCH v4 04/13] drm/xe: Skip device access during PCI error recovery Riana Tauro
2026-04-30 12:58   ` Anshuman Gupta
2026-05-06 15:41     ` Tauro, Riana [this message]
2026-04-17  8:58 ` [PATCH v4 05/13] drm/xe/xe_ras: Initialize Uncorrectable AER Registers Riana Tauro
2026-04-27  7:56   ` Raag Jadav
2026-05-05  5:22     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 06/13] drm/xe/xe_ras: Add basic structures and commands for uncorrectable errors Riana Tauro
2026-04-17 17:38   ` Matt Roper
2026-04-17 21:25     ` Jadav, Raag
2026-04-17 21:32       ` Matt Roper
2026-04-20  5:34         ` Tauro, Riana
2026-04-20  7:49           ` Raag Jadav
2026-04-17  8:58 ` [PATCH v4 07/13] drm/xe/xe_ras: Add support for uncorrectable core-compute errors Riana Tauro
2026-04-27  8:24   ` Raag Jadav
2026-05-05  5:28     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 08/13] drm/xe/xe_ras: Handle uncorrectable SoC Internal errors Riana Tauro
2026-04-17  8:58 ` [PATCH v4 09/13] drm/xe/xe_ras: Handle uncorrectable device memory errors Riana Tauro
2026-04-21  6:08   ` Upadhyay, Tejas
2026-05-05  5:03     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 10/13] drm/xe/xe_ras: Add support to offline/decline a page Riana Tauro
2026-04-21  6:21   ` Upadhyay, Tejas
2026-05-05  5:16     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 11/13] drm/xe/xe_ras: Add support for page offline list and queue commands Riana Tauro
2026-04-21  6:19   ` Upadhyay, Tejas
2026-05-05  5:08     ` Tauro, Riana
2026-04-21  9:10   ` Upadhyay, Tejas
2026-05-05  5:17     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 12/13] drm/xe/xe_ras: Query errors from system controller on probe Riana Tauro
2026-04-28 11:46   ` Raag Jadav
2026-05-05 13:50     ` Tauro, Riana
2026-04-17  8:58 ` [PATCH v4 13/13] drm/xe/xe_pci_error: Process errors in mmio_enabled Riana Tauro
2026-04-28 11:39   ` Raag Jadav
2026-05-05  5:31     ` Tauro, Riana
2026-04-30 11:15   ` Gupta, Anshuman
2026-05-02 17:55     ` Raag Jadav
2026-04-20 13:33 ` ✗ CI.checkpatch: warning for Introduce Xe Uncorrectable Error Handling (rev4) Patchwork
2026-04-20 13:35 ` ✓ CI.KUnit: success " Patchwork
2026-04-20 14:42 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-20 17:14 ` ✗ Xe.CI.FULL: failure " Patchwork
