Intel-XE Archive on lore.kernel.org
From: Matthew Auld <matthew.auld@intel.com>
To: "Zhang, Carl" <carl.zhang@intel.com>,
	"Brost, Matthew" <matthew.brost@intel.com>
Cc: "Lucas De Marchi" <lucas.demarchi@intel.com>,
	"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
	"Souza, Jose" <jose.souza@intel.com>,
	"Mrozek, Michal" <michal.mrozek@intel.com>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: [PATCH v3 1/2] drm/xe/uapi: disallow bind queue sharing
Date: Mon, 19 Jan 2026 12:37:36 +0000	[thread overview]
Message-ID: <598d3899-5942-485b-8e76-61bcbdfa5cbe@intel.com> (raw)
In-Reply-To: <PH0PR11MB55798387B824D4101E19545E87A9A@PH0PR11MB5579.namprd11.prod.outlook.com>

On 19/12/2025 06:36, Zhang, Carl wrote:
> 
> 
>> -----Original Message-----
>> From: Brost, Matthew <matthew.brost@intel.com>
>> Sent: Friday, December 19, 2025 4:41 AM
>> To: Auld, Matthew <matthew.auld@intel.com>
>> Cc: Lucas De Marchi <lucas.demarchi@intel.com>; intel-
>> xe@lists.freedesktop.org; Thomas Hellström
>> <thomas.hellstrom@linux.intel.com>; Souza, Jose <jose.souza@intel.com>;
>> Mrozek, Michal <michal.mrozek@intel.com>; Zhang, Carl
>> <carl.zhang@intel.com>; stable@vger.kernel.org
>> Subject: Re: [PATCH v3 1/2] drm/xe/uapi: disallow bind queue sharing
>>
>> On Mon, Nov 24, 2025 at 01:41:55PM +0000, Matthew Auld wrote:
>>> On 20/11/2025 15:34, Lucas De Marchi wrote:
>>>> On Thu, Nov 20, 2025 at 01:27:29PM +0000, Matthew Auld wrote:
>>>>> Currently this is very broken if someone attempts to create a bind
>>>>> queue and share it across multiple VMs. For example, currently we
>>>>> assume it is safe to acquire the user VM lock to protect some of
>>>>> the bind queue state, but if we allow sharing the bind queue across
>>>>> multiple VMs then this quickly breaks down.
>>>>>
>>>>> To fix this, reject using a bind queue with any VM that is not the
>>>>> same VM that was originally passed when creating the bind queue.
>>>>> This is a uAPI change; however, it was more of an oversight on the
>>>>> kernel side that we didn't reject this, and the expectation is that
>>>>> userspace shouldn't be using bind queues in this way, so in theory
>>>>> this change should go unnoticed.
>>>>>
>>>>> Based on a patch from Matt Brost.
>>>>>
>>>>> v2 (Matt B):
>>>>>   - Hold the vm lock over queue create, to ensure it can't be
>>>>> closed as
>>>>>     we attach the user_vm to the queue.
>>>>>   - Make sure we actually check for NULL user_vm in destruction path.
>>>>> v3:
>>>>>   - Fix error path handling.
>>>>>
>>>>> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel
>>>>> GPUs")
>>>>> Reported-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
>>>>> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>>>>> Cc: José Roberto de Souza <jose.souza@intel.com>
>>>>> Cc: Matthew Brost <matthew.brost@intel.com>
>>>>> Cc: Michal Mrozek <michal.mrozek@intel.com>
>>>>> Cc: Carl Zhang <carl.zhang@intel.com>
>>>>> Cc: <stable@vger.kernel.org> # v6.8+
>>>>
>>>> we never had any platform officially supported back in 6.8. Let's
>>>> make it 6.12 to avoid useless backporting work.
>>>>
>>>>> Acked-by: José Roberto de Souza <jose.souza@intel.com>
>>>>
>>>> Michal / Carl, can you also ack compute/media are ok with this change?
>>>
> I am OK with this; the current media driver only uses the default (0) and does not create a bind exec queue.

Thanks for confirming.

Michal, ping on this from compute POV?

> 
>>> Ping on this? I did a cursory grep for DRM_XE_ENGINE_CLASS_VM_BIND and
>>> found no users in compute-runtime or media-driver in upstream. This
>>> change should only be noticeable if you directly use
>>> DRM_XE_ENGINE_CLASS_VM_BIND to create a dedicated bind queue, which
>> you then pass into vm_bind.
>>>
>>
>> Yes, ping? It would be good to get this series in.
>>
>> Matt
>>
>>>>
>>>> Lucas De Marchi
>>>
> Thanks
> Carl



Thread overview: 17+ messages
2025-11-20 13:27 [PATCH v3 0/2] Some bind queue fixes Matthew Auld
2025-11-20 13:27 ` [PATCH v3 1/2] drm/xe/uapi: disallow bind queue sharing Matthew Auld
2025-11-20 15:06   ` Matthew Brost
2025-11-20 15:47     ` Yadav, Arvind
2025-11-20 15:34   ` Lucas De Marchi
2025-11-24 13:41     ` Matthew Auld
2025-12-18 20:40       ` Matthew Brost
2025-12-19  6:36         ` Zhang, Carl
2026-01-19 12:37           ` Matthew Auld [this message]
2026-01-19 14:14             ` Mrozek, Michal
2026-01-19 15:09               ` Matthew Auld
2025-11-20 13:27 ` [PATCH v3 2/2] drm/xe/migrate: fix job lock assert Matthew Auld
2025-11-20 15:30   ` Yadav, Arvind
2025-11-20 13:52 ` ✗ CI.checkpatch: warning for Some bind queue fixes (rev3) Patchwork
2025-11-20 13:53 ` ✓ CI.KUnit: success " Patchwork
2025-11-20 14:44 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-20 17:38 ` ✗ Xe.CI.Full: failure " Patchwork
