From: "Christian König" <christian.koenig@amd.com>
To: Matthew Brost <matthew.brost@intel.com>
Cc: intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
kenneth.w.graunke@intel.com, lionel.g.landwerlin@intel.com,
jose.souza@intel.com, simona.vetter@ffwll.ch,
thomas.hellstrom@linux.intel.com, boris.brezillon@collabora.com,
airlied@gmail.com, mihail.atanassov@arm.com,
steven.price@arm.com, shashank.sharma@amd.com
Subject: Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence
Date: Thu, 21 Nov 2024 10:31:16 +0100 [thread overview]
Message-ID: <01678a48-828a-400f-b989-51c497845340@amd.com> (raw)
In-Reply-To: <Zz5nrl3H2wAagwgE@lstrano-desk.jf.intel.com>
On 20.11.24 at 23:50, Matthew Brost wrote:
> On Wed, Nov 20, 2024 at 02:38:49PM +0100, Christian König wrote:
>> On 19.11.24 at 00:37, Matthew Brost wrote:
>>> Normalize user fence attachment to a DMA fence. A user fence is a simple
>>> seqno write to memory, implemented by attaching a DMA fence callback
>>> that writes out the seqno. The intended use case is importing a dma-fence
>>> into the kernel and exporting a user fence.
>>>
>>> Helpers added to allocate, attach, and free a dma_fence_user_fence.
>>>
>>> Cc: Dave Airlie <airlied@redhat.com>
>>> Cc: Simona Vetter <simona.vetter@ffwll.ch>
>>> Cc: Christian Koenig <christian.koenig@amd.com>
>>> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
>>> ---
>>> drivers/dma-buf/Makefile | 2 +-
>>> drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
>>> include/linux/dma-fence-user-fence.h | 31 +++++++++++
>>> 3 files changed, 105 insertions(+), 1 deletion(-)
>>> create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
>>> create mode 100644 include/linux/dma-fence-user-fence.h
>>>
>>> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
>>> index c25500bb38b5..ba9ba339319e 100644
>>> --- a/drivers/dma-buf/Makefile
>>> +++ b/drivers/dma-buf/Makefile
>>> @@ -1,6 +1,6 @@
>>> # SPDX-License-Identifier: GPL-2.0-only
>>> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
>>> - dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
>>> + dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
>>> obj-$(CONFIG_DMABUF_HEAPS) += heaps/
>>> obj-$(CONFIG_SYNC_FILE) += sync_file.o
>>> diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
>>> new file mode 100644
>>> index 000000000000..5a4b289bacb8
>>> --- /dev/null
>>> +++ b/drivers/dma-buf/dma-fence-user-fence.c
>>> @@ -0,0 +1,73 @@
>>> +// SPDX-License-Identifier: MIT
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#include <linux/dma-fence-user-fence.h>
>>> +#include <linux/slab.h>
>>> +
>>> +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
>>> +{
>>> + struct dma_fence_user_fence *user_fence =
>>> + container_of(cb, struct dma_fence_user_fence, cb);
>>> +
>>> + if (user_fence->map.is_iomem)
>>> + writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
>>> + else
>>> + *(u64 *)user_fence->map.vaddr = user_fence->seqno;
>>> +
>>> + dma_fence_user_fence_free(user_fence);
>>> +}
>>> +
>>> +/**
>>> + * dma_fence_user_fence_alloc() - Allocate user fence
>>> + *
>>> + * Return: Allocated struct dma_fence_user_fence on success, NULL on failure
>>> + */
>>> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
>>> +{
>>> + return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
>>> +
>>> +/**
>>> + * dma_fence_user_fence_free() - Free user fence
>>> + *
>>> + * Free a user fence. Should only be called directly when
>>> + * dma_fence_user_fence_attach() has not been called, to clean up the
>>> + * original allocation from dma_fence_user_fence_alloc().
>>> + */
>>> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
>>> +{
>>> + kfree(user_fence);
>> We need to give that child a different name, e.g. something like
>> dma_fence_seq_write or something like that.
>>
> Yea, I didn't like this name either. dma_fence_seq_write seems better.
>
>> I was just about to complain that all dma_fence implementations need to be
>> RCU safe and only then saw that this isn't a dma_fence implementation.
>>
> Nope, just a helper to back a value which user space can find when a
> dma-fence signals.
>
>> Question: Why is that useful in the first place? At least AMD HW can write
>> to basically all memory locations and even registers when a fence finishes?
>>
> This could be used in a few places.
>
> 1. On VM bind completion a seqno is written which user jobs can then wait
> on via ring instructions. We have something similar to this in Xe for LR
> VMs already, but I don't really like that interface - it is user address
> + write value. This would be based on a BO + offset, which I think makes
> a bit more sense and should perform quite a bit better too. I haven't
> wired this up in this series but plan on doing so.
>
> 2. Convert an input dma-fence into a seqno writeback when the dma-fence
> signals. Again, this seqno is something the user can wait on via ring
> instructions.
>
> The flow here would be: a user job needs to wait on an external dma-fence
> in a syncobj, syncfile, etc., so call the convert-dma-fence-to-user-fence
> IOCTL before the submission (patches 22 and 28 in this series), program the
> wait via ring instructions, and then do the user submission. This would
> avoid blocking on external dma-fences in the submission path.
>
> I think this makes sense, and having a lightweight helper to normalize
> this flow across drivers makes a bit of sense too.
Well, we have pretty much the same concept, but all writes are done by
the hardware and don't go through a round-trip via the CPU.
We have a read-only mapped seq64 area in the kernel-reserved part of the
VM address space.
Through this area the queues can see each other's fence progress, and we
can express things like: the BO mapping and TLB flush are finished when
this seq64 increases, so please suspend further processing until you see
that.
Could be that this is useful for more than Xe, but at least for AMD I
currently don't see that.
Regards,
Christian.
>
> Matt
>
>> Regards,
>> Christian.
>>
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_free);
>>> +
>>> +/**
>>> + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
>>> + *
>>> + * @fence: fence
>>> + * @user_fence: user fence
>>> + * @map: IOSYS map to write seqno to
>>> + * @seqno: seqno to write to IOSYS map
>>> + *
>>> + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
>>> + * The caller must guarantee that the memory in the IOSYS map doesn't move
>>> + * before the fence signals. This is typically done by installing the DMA fence
>>> + * into the BO's DMA reservation bookkeeping slot from which the IOSYS map
>>> + * was derived.
>>> + */
>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>> + struct dma_fence_user_fence *user_fence,
>>> + struct iosys_map *map, u64 seqno)
>>> +{
>>> + int err;
>>> +
>>> + user_fence->map = *map;
>>> + user_fence->seqno = seqno;
>>> +
>>> + err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
>>> + if (err == -ENOENT)
>>> + user_fence_cb(NULL, &user_fence->cb);
>>> +}
>>> +EXPORT_SYMBOL(dma_fence_user_fence_attach);
>>> diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
>>> new file mode 100644
>>> index 000000000000..8678129c7d56
>>> --- /dev/null
>>> +++ b/include/linux/dma-fence-user-fence.h
>>> @@ -0,0 +1,31 @@
>>> +/* SPDX-License-Identifier: MIT */
>>> +/*
>>> + * Copyright © 2024 Intel Corporation
>>> + */
>>> +
>>> +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
>>> +#define __LINUX_DMA_FENCE_USER_FENCE_H
>>> +
>>> +#include <linux/dma-fence.h>
>>> +#include <linux/iosys-map.h>
>>> +
>>> +/** struct dma_fence_user_fence - User fence */
>>> +struct dma_fence_user_fence {
>>> + /** @cb: dma-fence callback used to attach user fence to dma-fence */
>>> + struct dma_fence_cb cb;
>>> + /** @map: IOSYS map to write seqno to */
>>> + struct iosys_map map;
>>> + /** @seqno: seqno to write to IOSYS map */
>>> + u64 seqno;
>>> +};
>>> +
>>> +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
>>> +
>>> +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
>>> +
>>> +void dma_fence_user_fence_attach(struct dma_fence *fence,
>>> + struct dma_fence_user_fence *user_fence,
>>> + struct iosys_map *map,
>>> + u64 seqno);
>>> +
>>> +#endif