Intel-XE Archive on lore.kernel.org
From: "Christian König" <christian.koenig@amd.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	simona.vetter@ffwll.ch, thomas.hellstrom@linux.intel.com,
	pstanner@redhat.com, boris.brezillon@collabora.com,
	airlied@gmail.com, ltuikov89@gmail.com, dakr@kernel.org,
	mihail.atanassov@arm.com, steven.price@arm.com,
	shashank.sharma@amd.com
Subject: Re: [RFC PATCH 0/6] Common preempt fences and semantics
Date: Wed, 13 Nov 2024 10:02:12 +0100	[thread overview]
Message-ID: <ac5b9c6e-027a-40e2-bdab-2cc5e10067d6@amd.com> (raw)
In-Reply-To: <ZzQPYocTEvnJVgQ1@lstrano-desk.jf.intel.com>

On 13.11.24 at 03:30, Matthew Brost wrote:
> [SNIP]
>>>> If you're using gpuvm, just call drm_gpuvm_resv_add_fence. I assume AMD has a
>>>> similarly simple call.
>>> Nope, we try to avoid locking all BOs in the VM as hard as we can.
>>>
>> Why? Calling in to perform fence conversion shouldn't be all that
>> frequent and simplifies things.
>>
>> Also, it's likely that only a few locks are involved, as not too many
>> external BOs are mapped within a VM (i.e., most BOs share the VM's
>> dma-resv lock).

The most common use case is multi-GPU systems which share a lot of data
in a NUMA cluster.

In this configuration almost all BOs are shared between the GPUs, which
makes locking the whole VM an operation with massive overhead that
should be avoided as much as possible.

>>>> Now the ordering works inherently in dma-resv and the scheduler, e.g. no
>>>> need to track the last completion fences explicitly in preempt fences.
>>> I really don't think that this is a good approach. Explicitly keeping the
>>> last completion fence in the pre-emption fence is basically a must have as
>>> far as I can see.
>>>
>>> The approach you take here looks like a really ugly hack to me.
>>>
>> Well, I have to disagree; it seems like a pretty solid, common design.

What you basically do is move the responsibility for signaling fences in 
the right order from the provider of the fences to their consumers.

Since we have tons of consumers of those fences, this is not even 
remotely a defensive design.

>>
>> Anyway, I think I have this more or less working. I want to run this by
>> the Mesa team a bit to ensure I haven't missed anything, and will likely
>> post something shortly after.
>>
>> We can discuss this more after I post and perhaps solicit other
>> opinions, weighing the pros and cons of the approaches here. I do think
>> they function roughly the same, so something commonly agreed upon would
>> be good. Sharing a bit of code, if possible, is always a plus too.

Well, to make it clear: that will never ever get a green light from my 
side as DMA-buf maintainer. What you suggest here is extremely fragile.

Why not simply wait on the pending completion fences as a dependency for 
signaling the preemption fences?

That should work for all drivers and is trivial to implement as far as I 
can see.

Regards,
Christian.

>>
>> Matt
>>
>>> Regards,
>>> Christian.
>>>

Thread overview: 27+ messages
2024-11-09 17:29 [RFC PATCH 0/6] Common preempt fences and semantics Matthew Brost
2024-11-09 17:29 ` [RFC PATCH 1/6] dma-resv: Add DMA_RESV_USAGE_PREEMPT Matthew Brost
2024-11-09 17:29 ` [RFC PATCH 2/6] drm/sched: Teach scheduler about DMA_RESV_USAGE_PREEMPT Matthew Brost
2024-11-12  9:06   ` Philipp Stanner
2024-11-12 20:08     ` Matthew Brost
2024-11-13 11:03       ` Philipp Stanner
2024-11-09 17:29 ` [RFC PATCH 3/6] dma-fence: Add dma_fence_preempt base class Matthew Brost
2024-11-09 17:29 ` [RFC PATCH 4/6] drm/sched: Teach scheduler about dma_fence_prempt type Matthew Brost
2024-11-09 17:29 ` [RFC PATCH 5/6] drm/xe: Use DMA_RESV_USAGE_PREEMPT for preempt fences Matthew Brost
2024-11-09 17:29 ` [RFC PATCH 6/6] drm/xe: Use dma_fence_preempt base class Matthew Brost
2024-11-09 17:35 ` ✓ CI.Patch_applied: success for Common preempt fences and semantics Patchwork
2024-11-09 17:35 ` ✗ CI.checkpatch: warning " Patchwork
2024-11-09 17:36 ` ✓ CI.KUnit: success " Patchwork
2024-11-09 17:48 ` ✓ CI.Build: " Patchwork
2024-11-09 17:50 ` ✓ CI.Hooks: " Patchwork
2024-11-09 17:51 ` ✗ CI.checksparse: warning " Patchwork
2024-11-09 18:16 ` ✓ CI.BAT: success " Patchwork
2024-11-10  8:13 ` ✗ CI.FULL: failure " Patchwork
2024-11-11 13:42 ` [RFC PATCH 0/6] " Christian König
2024-11-12  3:29   ` Matthew Brost
2024-11-12 11:09     ` Christian König
2024-11-13  2:27       ` Matthew Brost
2024-11-13  2:30         ` Matthew Brost
2024-11-13  9:02           ` Christian König [this message]
2024-11-13 15:34             ` Matthew Brost
2024-11-14  8:38               ` Christian König
2024-11-15 19:38                 ` Matthew Brost
