Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v4 08/34] drm/xe: Don't change LRC ring head on job resubmission
Date: Mon, 6 Oct 2025 09:59:30 +0100	[thread overview]
Message-ID: <efa2dc2b-c143-474d-b3c8-4d78a2137ee9@intel.com> (raw)
In-Reply-To: <aOIV4E8WqXnGF4AU@lstrano-desk.jf.intel.com>

On 05/10/2025 07:53, Matthew Brost wrote:
> On Sat, Oct 04, 2025 at 10:25:34PM -0700, Matthew Brost wrote:
>> On Thu, Oct 02, 2025 at 03:15:13PM +0100, Matthew Auld wrote:
>>> On 02/10/2025 06:53, Matthew Brost wrote:
>>>> Now that we save the job's head during submission, it's no longer
>>>> necessary to adjust the LRC ring head during resubmission. Instead, a
>>>> software-based adjustment of the tail will overwrite the old jobs in
>>>> place. For some odd reason, adjusting the LRC ring head didn't work on
>>>> parallel queues, which was causing issues in our CI.
>>>>
>>>> v6:
>>>>    - Also set LRC tail to head so queue is idle coming out of reset
>>>>
>>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>>> Reviewed-by: Tomasz Lis <tomasz.lis@intel.com>
>>>> ---
>>>>    drivers/gpu/drm/xe/xe_guc_submit.c | 10 ++++++++--
>>>>    1 file changed, 8 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
>>>> index 3a534d93505f..70306f902ba5 100644
>>>> --- a/drivers/gpu/drm/xe/xe_guc_submit.c
>>>> +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
>>>> @@ -2008,11 +2008,17 @@ static void guc_exec_queue_start(struct xe_exec_queue *q)
>>>>    	struct xe_gpu_scheduler *sched = &q->guc->sched;
>>>>    	if (!exec_queue_killed_or_banned_or_wedged(q)) {
>>>> +		struct xe_sched_job *job = xe_sched_first_pending_job(sched);
>>>>    		int i;
>>>>    		trace_xe_exec_queue_resubmit(q);
>>>> -		for (i = 0; i < q->width; ++i)
>>>> -			xe_lrc_set_ring_head(q->lrc[i], q->lrc[i]->ring.tail);
>>>> +		if (job) {
>>>> +			for (i = 0; i < q->width; ++i) {
>>>> +				q->lrc[i]->ring.tail = job->ptrs[i].head;
>>>> +				xe_lrc_set_ring_tail(q->lrc[i],
>>>> +						     xe_lrc_ring_head(q->lrc[i]));
>>>
>>> IIRC the sched pending_list stuff can also give back pending jobs that have
>>> completed on the hw, but are still kept pending until the final free_job()?
>>>
>>> Suppose we have a pending_list like:
>>>
>>> [pending/complete, pending/complete, actual pending kernel job that never
>>> completed/ran]
>>>
>>> IIUC the sw ring.tail will actually go backwards to the first pending
>>> free/complete job head in the pending_list, with the hw tail being reset to
>>> the current hw head here. But on the next submit the sw ring.tail is where
>>> the commands are emitted to, and on the next update
>>> of the hw tail it will be synced to the sw ring.tail? But if that happens
>>> won't we get hw tail < hw head (since we used the head of an already
>>> complete job for the sw tail), which will make the hw think there is a
>>> massive ring wrap, so it will execute garbage until it wraps back around to
>>> tail?
>>>
>>
>> Let me tweak this flow to use the skip_emit / last_replay flow
>> introduced later in series to avoid this issue.
>>
> 
> Actually this flow works just fine. The GuC state is completely lost
> during this flow - the context is not even registered. By the time the
> context is registered, the LRC head will be at the original position
> of the first pending job and the LRC tail will be at the end of the
> first job.

Can you share some more info on the flow here? I'm seeing the LRC hw head 
at the correct position, which must have moved past anything already 
completed by the hw, right? But here the sw lrc tail is potentially being 
moved backwards to the head of something already complete (the job we 
pick is pending, but only because free_job() has not run yet, so the job 
has already signalled/run). Below, when we do something like 
xe_sched_resubmit_jobs(), the hw tail is then updated to the sw tail, but 
now we end up with hw tail < hw head?

Also the flow I'm thinking about here is forced suspend/resume.

> 
> Matt
> 
>> Matt
>>
>>>> +			}
>>>> +		}
>>>>    		xe_sched_resubmit_jobs(sched);
>>>>    	}
>>>



Thread overview: 71+ messages
2025-10-02  5:53 [PATCH v4 00/34] VF migration redesign Matthew Brost
2025-10-02  5:53 ` [PATCH v4 01/34] drm/xe: Add NULL checks to scratch LRC allocation Matthew Brost
2025-10-02 22:02   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 02/34] Revert "drm/xe/vf: Rebase exec queue parallel commands during migration recovery" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 03/34] Revert "drm/xe/vf: Post migration, repopulate ring area for pending request" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 04/34] Revert "drm/xe/vf: Fixup CTB send buffer messages after migration" Matthew Brost
2025-10-02  5:53 ` [PATCH v4 05/34] drm/xe: Save off position in ring in which a job was programmed Matthew Brost
2025-10-02  5:53 ` [PATCH v4 06/34] drm/xe/guc: Track pending-enable source in submission state Matthew Brost
2025-10-02  5:53 ` [PATCH v4 07/34] drm/xe: Track LR jobs in DRM scheduler pending list Matthew Brost
2025-10-02 16:14   ` Matthew Auld
2025-10-05  5:21     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 08/34] drm/xe: Don't change LRC ring head on job resubmission Matthew Brost
2025-10-02 14:15   ` Matthew Auld
2025-10-05  5:25     ` Matthew Brost
2025-10-05  6:53       ` Matthew Brost
2025-10-06  8:59         ` Matthew Auld [this message]
2025-10-02  5:53 ` [PATCH v4 09/34] drm/xe: Make LRC W/A scratch buffer usage consistent Matthew Brost
2025-10-02  5:53 ` [PATCH v4 10/34] drm/xe/guc: Document GuC submission backend Matthew Brost
2025-10-03 14:30   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 11/34] drm/xe/vf: Add xe_gt_recovery_inprogress helper Matthew Brost
2025-10-03  1:39   ` Lis, Tomasz
2025-10-04  4:32     ` Matthew Brost
2025-10-03  8:40   ` Michal Wajdeczko
2025-10-04  4:32     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 12/34] drm/xe/vf: Make VF recovery run on per-GT worker Matthew Brost
2025-10-02  5:53 ` [PATCH v4 13/34] drm/xe/vf: Abort H2G sends during VF post-migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 14/34] drm/xe/vf: Remove memory allocations from VF post migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 15/34] drm/xe/vf: Close multi-GT GGTT shift race Matthew Brost
2025-10-03 14:24   ` Michal Wajdeczko
2025-10-04  4:36     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 16/34] drm/xe/vf: Teardown VF post migration worker on driver unload Matthew Brost
2025-10-02  5:53 ` [PATCH v4 17/34] drm/xe/vf: Don't allow GT reset to be queued during VF post migration recovery Matthew Brost
2025-10-03 16:09   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 18/34] drm/xe/vf: Wakeup in GuC backend on " Matthew Brost
2025-10-03 14:38   ` Michal Wajdeczko
2025-10-05  6:22     ` Matthew Brost
2025-10-05  6:35       ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 19/34] drm/xe/vf: Avoid indefinite blocking in preempt rebind worker for VFs supporting migration Matthew Brost
2025-10-02  5:53 ` [PATCH v4 20/34] drm/xe/vf: Use GUC_HXG_TYPE_EVENT for GuC context register Matthew Brost
2025-10-03 14:26   ` Lis, Tomasz
2025-10-05  5:43     ` Matthew Brost
2025-10-03 14:57   ` Michal Wajdeczko
2025-10-02  5:53 ` [PATCH v4 21/34] drm/xe/vf: Flush and stop CTs in VF post migration recovery Matthew Brost
2025-10-02  5:53 ` [PATCH v4 22/34] drm/xe/vf: Reset TLB invalidations during " Matthew Brost
2025-10-02  5:53 ` [PATCH v4 23/34] drm/xe/vf: Kickstart after resfix in " Matthew Brost
2025-10-02  5:53 ` [PATCH v4 24/34] drm/xe/vf: Start CTs before resfix " Matthew Brost
2025-10-02 21:50   ` Lis, Tomasz
2025-10-03 15:10   ` Michal Wajdeczko
2025-10-05  6:49     ` Matthew Brost
2025-10-05 12:28       ` Michal Wajdeczko
2025-10-02  5:53 ` [PATCH v4 25/34] drm/xe/vf: Abort VF post migration recovery on failure Matthew Brost
2025-10-02  5:53 ` [PATCH v4 26/34] drm/xe/vf: Replay GuC submission state on pause / unpause Matthew Brost
2025-10-02  5:53 ` [PATCH v4 27/34] drm/xe: Move queue init before LRC creation Matthew Brost
2025-10-03 13:25   ` Lis, Tomasz
2025-10-05  8:03     ` Matthew Brost
2025-10-02  5:53 ` [PATCH v4 28/34] drm/xe/vf: Add debug prints for GuC replaying state during VF recovery Matthew Brost
2025-10-03 13:08   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 29/34] drm/xe/vf: Workaround for race condition in GuC firmware during VF pause Matthew Brost
2025-10-03 13:06   ` Lis, Tomasz
2025-10-02  5:53 ` [PATCH v4 30/34] drm/xe: Use PPGTT addresses for TLB invalidation to avoid GGTT fixups Matthew Brost
2025-10-02  5:53 ` [PATCH v4 31/34] drm/xe/vf: Use primary GT ordered work queue on media GT on PTL VF Matthew Brost
2025-10-02 21:00   ` Lis, Tomasz
2025-10-05  7:03     ` Matthew Brost
2025-10-02  5:54 ` [PATCH v4 32/34] drm/xe/vf: Ensure media GT VF recovery runs after primary GT on PTL Matthew Brost
2025-10-02 20:19   ` Lis, Tomasz
2025-10-02  5:54 ` [PATCH v4 33/34] drm/xe/vf: Rebase CCS save/restore BB GGTT addresses Matthew Brost
2025-10-02  5:54 ` [PATCH v4 34/34] drm/xe/guc: Increase wait timeout to 2sec after BUSY reply from GuC Matthew Brost
2025-10-02  6:45 ` ✗ CI.checkpatch: warning for VF migration redesign (rev4) Patchwork
2025-10-02  6:47 ` ✓ CI.KUnit: success " Patchwork
2025-10-02  7:33 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-02  9:19 ` ✗ Xe.CI.Full: " Patchwork
