public inbox for intel-xe@lists.freedesktop.org
From: "Lis, Tomasz" <tomasz.lis@intel.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org,
	"Michał Winiarski" <michal.winiarski@intel.com>,
	"Michał Wajdeczko" <michal.wajdeczko@intel.com>,
	"Piotr Piórkowski" <piotr.piorkowski@intel.com>,
	"Lucas De Marchi" <lucas.demarchi@intel.com>
Subject: Re: [PATCH v1 2/4] drm/xe/vf: Avoid LRC being freed while applying fixups
Date: Tue, 10 Feb 2026 21:16:19 +0100	[thread overview]
Message-ID: <21fd764b-8eb9-4364-8fff-107e0121fdc8@intel.com> (raw)
In-Reply-To: <aYYpBk1h6Fpjy+yN@lstrano-desk.jf.intel.com>



On 2/6/2026 6:46 PM, Matthew Brost wrote:
> On Fri, Feb 06, 2026 at 03:53:32PM +0100, Tomasz Lis wrote:
>> There is a small but non-zero chance that fixups are running on
>> a context during teardown. The chances are decreased by starting
>> the teardown by releasing guc_id, but remain non-zero.
>> On the other hand the sync between fixups and context creation
>> drastically increases chance for such parallel teardown if
>> context creation fails.
>>
> I don't think this is possible as xe_exec_queue_contexts_hwsp_rebase is
> done under the submission_state.lock + being present in
> submission_state.exec_queue_lookup. The removal is done in q->ops->fini
> and the xe_lrc_put(s) should run after this step.
>
> This is weakly documented, and should be improved, but I don't see the
> problem. I am missing something here?

Even if we use q->ops->fini and then xe_lrc_put(s), this does not remove
the possibility that recovery will grab the reference before guc_id is
freed and continue using it while the LRCs are getting freed on final
put(). While we're protecting against that by a lock in most places,
we're not doing it in the exec queue creation error path.

If a VM is running low on resources, the post-migration recovery is
definitely a place which would quickly consume any remaining VRAM due to
the HW being stopped, so reaching the error path is more likely in this
situation.


Let's see if the idea of late LRC fixups pans out; maybe this discussion
won't be relevant.

-Tomasz

> Matt
>
>> Prevent LRC teardown in parallel with fixups by getting a reference.
>>
>> Signed-off-by: Tomasz Lis<tomasz.lis@intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_exec_queue.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
>> index 42849be46166..e9396ad3390a 100644
>> --- a/drivers/gpu/drm/xe/xe_exec_queue.c
>> +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
>> @@ -1669,10 +1669,11 @@ int xe_exec_queue_contexts_hwsp_rebase(struct xe_exec_queue *q, void *scratch)
>>   		lrc = READ_ONCE(q->lrc[i]);
>>   		if (!lrc)
>>   			continue;
>> -
>> +		xe_lrc_get(lrc);
>>   		xe_lrc_update_memirq_regs_with_address(lrc, q->hwe, scratch);
>>   		xe_lrc_update_hwctx_regs_with_address(lrc);
>>   		err = xe_lrc_setup_wa_bb_with_scratch(lrc, q->hwe, scratch);
>> +		xe_lrc_put(lrc);
>>   		if (err)
>>   			break;
>>   	}
>> -- 
>> 2.25.1
>>

[-- Attachment #2: Type: text/html, Size: 3451 bytes --]


Thread overview: 14+ messages
2026-02-06 14:53 [PATCH v1 0/4] drm/xe/vf: Fix exec queue creation during post-migration recovery Tomasz Lis
2026-02-06 14:53 ` [PATCH v1 1/4] drm/xe/queue: Call fini on exec queue creation fail Tomasz Lis
2026-02-06 17:38   ` Matthew Brost
2026-02-06 14:53 ` [PATCH v1 2/4] drm/xe/vf: Avoid LRC being freed while applying fixups Tomasz Lis
2026-02-06 17:46   ` Matthew Brost
2026-02-10 20:16     ` Lis, Tomasz [this message]
2026-02-06 14:53 ` [PATCH v1 3/4] drm/xe/vf: Wait for default LRCs fixups before using Tomasz Lis
2026-02-06 18:11   ` Matthew Brost
2026-02-10 20:11     ` Lis, Tomasz
2026-02-18 23:15       ` Lis, Tomasz
2026-02-06 14:53 ` [PATCH v1 4/4] drm/xe/vf: Redo LRC creation while in VF fixups Tomasz Lis
2026-02-06 14:56 ` ✓ CI.KUnit: success for drm/xe/vf: Fix exec queue creation during post-migration recovery Patchwork
2026-02-06 15:29 ` ✓ Xe.CI.BAT: " Patchwork
2026-02-07 15:42 ` ✗ Xe.CI.FULL: failure " Patchwork
