From: Matthew Brost <matthew.brost@intel.com>
To: "Christian König" <christian.koenig@amd.com>
Cc: <intel-xe@lists.freedesktop.org>,
<dri-devel@lists.freedesktop.org>, <kenneth.w.graunke@intel.com>,
<lionel.g.landwerlin@intel.com>, <jose.souza@intel.com>,
<simona.vetter@ffwll.ch>, <thomas.hellstrom@linux.intel.com>,
<boris.brezillon@collabora.com>, <airlied@gmail.com>,
<mihail.atanassov@arm.com>, <steven.price@arm.com>,
<shashank.sharma@amd.com>
Subject: Re: [RFC PATCH 01/29] dma-fence: Add dma_fence_preempt base class
Date: Wed, 20 Nov 2024 09:36:53 -0800 [thread overview]
Message-ID: <Zz4eNeYuHulccROH@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <431e53fd-ab98-427c-a6ed-6c2907438952@amd.com>
On Wed, Nov 20, 2024 at 02:31:50PM +0100, Christian König wrote:
> Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > Add a dma_fence_preempt base class with driver ops to implement
> > preemption, based on the existing Xe preemptive fence implementation.
> >
> > Annotated to ensure correct driver usage.
> >
> > Cc: Dave Airlie <airlied@redhat.com>
> > Cc: Simona Vetter <simona.vetter@ffwll.ch>
> > Cc: Christian Koenig <christian.koenig@amd.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > ---
> > drivers/dma-buf/Makefile | 2 +-
> > drivers/dma-buf/dma-fence-preempt.c | 133 ++++++++++++++++++++++++++++
> > include/linux/dma-fence-preempt.h | 56 ++++++++++++
> > 3 files changed, 190 insertions(+), 1 deletion(-)
> > create mode 100644 drivers/dma-buf/dma-fence-preempt.c
> > create mode 100644 include/linux/dma-fence-preempt.h
> >
> > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > index 70ec901edf2c..c25500bb38b5 100644
> > --- a/drivers/dma-buf/Makefile
> > +++ b/drivers/dma-buf/Makefile
> > @@ -1,6 +1,6 @@
> > # SPDX-License-Identifier: GPL-2.0-only
> > obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > - dma-fence-unwrap.o dma-resv.o
> > + dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> > obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> > obj-$(CONFIG_SYNC_FILE) += sync_file.o
> > diff --git a/drivers/dma-buf/dma-fence-preempt.c b/drivers/dma-buf/dma-fence-preempt.c
> > new file mode 100644
> > index 000000000000..6e6ce7ea7421
> > --- /dev/null
> > +++ b/drivers/dma-buf/dma-fence-preempt.c
> > @@ -0,0 +1,133 @@
> > +// SPDX-License-Identifier: MIT
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#include <linux/dma-fence-preempt.h>
> > +#include <linux/dma-resv.h>
> > +
> > +static void dma_fence_preempt_work_func(struct work_struct *w)
> > +{
> > + bool cookie = dma_fence_begin_signalling();
> > + struct dma_fence_preempt *pfence =
> > + container_of(w, typeof(*pfence), work);
> > + const struct dma_fence_preempt_ops *ops = pfence->ops;
> > + int err = pfence->base.error;
> > +
> > + if (!err) {
> > + err = ops->preempt_wait(pfence);
> > + if (err)
> > + dma_fence_set_error(&pfence->base, err);
> > + }
> > +
> > + dma_fence_signal(&pfence->base);
> > + ops->preempt_finished(pfence);
>
> Why is that callback useful?
>
In Xe, this is where we kick the resume worker and drop a reference to the
preemption object, which in Xe is an individual queue and in AMD is a VM,
right? Regarding the preemption object, I've reasoned this should work for
either a per-queue or a per-VM driver design of preempt fences.
This part could likely be moved into the preempt_wait callback, but that
would get a little goofy in the error case where preempt_wait is not
called, as the driver side would still need to clean up a reference. Maybe
I don't even need a reference though - I have to think that through - but
for general safety we typically like to take references whenever a fence
references a different object.
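To make the kick-worker-and-drop-ref flow concrete, a driver-side
preempt_finished might look roughly like the sketch below. All driver
types and helpers here (driver_queue, driver_queue_put, resume_work) are
made-up names for illustration, not actual Xe code:

```c
/* Hypothetical driver-side implementation of the preempt_finished op.
 * The preempt fence is embedded in the driver's preemption object.
 */
struct driver_queue {
	struct dma_fence_preempt pfence;	/* embedded base class */
	struct work_struct resume_work;		/* resumes the queue later */
	struct kref ref;
};

static void driver_preempt_finished(struct dma_fence_preempt *fence)
{
	struct driver_queue *q =
		container_of(fence, struct driver_queue, pfence);

	/* Kick the worker that will eventually resume the queue ... */
	queue_work(system_wq, &q->resume_work);

	/* ... and drop the reference the fence held on the queue. */
	driver_queue_put(q);
}
```

The same shape works if the preemption object is a VM rather than a
queue; only the container type changes.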
> > +
> > + dma_fence_end_signalling(cookie);
> > +}
> > +
> > +static const char *
> > +dma_fence_preempt_get_driver_name(struct dma_fence *fence)
> > +{
> > + return "dma_fence_preempt";
> > +}
> > +
> > +static const char *
> > +dma_fence_preempt_get_timeline_name(struct dma_fence *fence)
> > +{
> > + return "ordered";
> > +}
> > +
> > +static void dma_fence_preempt_issue(struct dma_fence_preempt *pfence)
> > +{
> > + int err;
> > +
> > + err = pfence->ops->preempt(pfence);
> > + if (err)
> > + dma_fence_set_error(&pfence->base, err);
> > +
> > + queue_work(pfence->wq, &pfence->work);
> > +}
> > +
> > +static void dma_fence_preempt_cb(struct dma_fence *fence,
> > + struct dma_fence_cb *cb)
> > +{
> > + struct dma_fence_preempt *pfence =
> > + container_of(cb, typeof(*pfence), cb);
> > +
> > + dma_fence_preempt_issue(pfence);
> > +}
> > +
> > +static void dma_fence_preempt_delay(struct dma_fence_preempt *pfence)
> > +{
> > + struct dma_fence *fence;
> > + int err;
> > +
> > + fence = pfence->ops->preempt_delay(pfence);
>
> Mhm, why is that useful?
>
This is for attaching the preempt object's last exported fence, which
needs to signal before the preemption is issued. So for purely
long-running VMs, this callback can be NULL. For VMs with user queues
plus dma-fences, the driver returns the last fence handed out by the
convert-user-fence-to-dma-fence IOCTL.
I realized my kernel doc doesn't explain this as well as it should; I
have already made it more verbose locally, and hopefully it now clearly
explains all of this.
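As a rough illustration of the intended usage (the driver_vm type and
last_exported_fence field are hypothetical names, not from any real
driver), a preempt_delay implementation could simply hand back the last
exported fence:

```c
/* Hypothetical preempt_delay op: return the last fence the driver
 * exported for this preempt object, so preemption is deferred until
 * that fence signals.
 */
static struct dma_fence *
driver_preempt_delay(struct dma_fence_preempt *fence)
{
	struct driver_vm *vm =
		container_of(fence, struct driver_vm, pfence);

	/* The caller consumes this reference once the fence signals. */
	return dma_fence_get(vm->last_exported_fence);
}
```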
> > + if (WARN_ON_ONCE(!fence || IS_ERR(fence)))
> > + return;
> > +
> > + err = dma_fence_add_callback(fence, &pfence->cb, dma_fence_preempt_cb);
>
> You are running into the exactly same bug we had :)
>
> The problem here is that you can't call dma_fence_add_callback() from the
> enable_signaling callback. Background is that the
> fence_ops->enable_signaling callback is called with the spinlock of the
> preemption fence held.
>
> This spinlock can be the same as the one of the user fence, but it could
> also be a different one. Either way calling dma_fence_add_callback() would
> let lockdep print a nice warning.
>
Hmm, I see the problem if you share a lock between the preempt fence and
the last exported fence, but as long as these locks are separate I don't
see the issue.
The locking order then is:
preempt fence lock -> last exported fence lock.
Lockdep does not explode in Xe, but maybe I can buy that this is a little
unsafe. We could always move preempt_delay to the worker, attach a CB
there, and re-kick the worker upon the last fence signaling, if you think
that is safer. Of course, we could also just directly wait on the
returned last fence in the worker.
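For the record, the worker-based variant I'm describing would look
something like the sketch below. It is purely illustrative: it
restructures dma_fence_preempt_work_func rather than showing real code,
requeue_work_cb is a made-up callback that simply queues the work item
again, and error handling is elided.

```c
/* Sketch: handle preempt_delay from the work item, so that
 * dma_fence_add_callback() is never called from enable_signaling
 * (i.e. never with the preempt fence's spinlock held).
 */
static void preempt_work_func(struct work_struct *w)
{
	struct dma_fence_preempt *pfence =
		container_of(w, typeof(*pfence), work);
	struct dma_fence *last;

	if (pfence->ops->preempt_delay) {
		last = pfence->ops->preempt_delay(pfence);
		if (last && !dma_fence_is_signaled(last)) {
			/* Re-kick this worker once the last exported
			 * fence signals, then bail out for now.
			 */
			dma_fence_add_callback(last, &pfence->cb,
					       requeue_work_cb);
			return;
		}
	}

	/* ... issue preemption, wait for it, and signal pfence ... */
}
```

This keeps the CB attachment out of the enable_signaling path entirely,
at the cost of one extra worker round trip.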
> I tried to solve this by changing the dma_fence code to not call
> enable_signaling with the lock held, we wanted to do that anyway to prevent
> a bunch of issues with driver unload. But I realized that getting this
> upstream would take too long.
>
> Long story short we moved handling the user fence into the work item.
>
I did run into an issue when trying to make preempt_wait return a fence,
attach a CB, and signal the preempt fence from that CB. I hit lock
inversions; I almost worked through them but eventually gave up and stuck
with the worker.
Matt
> Apart from that looks rather solid to me.
>
> Regards,
> Christian.
>
> > + if (err == -ENOENT)
> > + dma_fence_preempt_issue(pfence);
> > +}
> > +
> > +static bool dma_fence_preempt_enable_signaling(struct dma_fence *fence)
> > +{
> > + struct dma_fence_preempt *pfence =
> > + container_of(fence, typeof(*pfence), base);
> > +
> > + if (pfence->ops->preempt_delay)
> > + dma_fence_preempt_delay(pfence);
> > + else
> > + dma_fence_preempt_issue(pfence);
> > +
> > + return true;
> > +}
> > +
> > +static const struct dma_fence_ops preempt_fence_ops = {
> > + .get_driver_name = dma_fence_preempt_get_driver_name,
> > + .get_timeline_name = dma_fence_preempt_get_timeline_name,
> > + .enable_signaling = dma_fence_preempt_enable_signaling,
> > +};
> > +
> > +/**
> > + * dma_fence_is_preempt() - Is preempt fence
> > + *
> > + * @fence: Preempt fence
> > + *
> > + * Return: True if preempt fence, False otherwise
> > + */
> > +bool dma_fence_is_preempt(const struct dma_fence *fence)
> > +{
> > + return fence->ops == &preempt_fence_ops;
> > +}
> > +EXPORT_SYMBOL(dma_fence_is_preempt);
> > +
> > +/**
> > + * dma_fence_preempt_init() - Initialize preempt fence
> > + *
> > + * @fence: Preempt fence
> > + * @ops: Preempt fence operations
> > + * @wq: Work queue for preempt wait, should have WQ_MEM_RECLAIM set
> > + * @context: Fence context
> > + * @seqno: Fence sequence number
> > + */
> > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > + const struct dma_fence_preempt_ops *ops,
> > + struct workqueue_struct *wq,
> > + u64 context, u64 seqno)
> > +{
> > + /*
> > + * XXX: We really want to check wq for WQ_MEM_RECLAIM here but
> > + * workqueue_struct is private.
> > + */
> > +
> > + fence->ops = ops;
> > + fence->wq = wq;
> > + INIT_WORK(&fence->work, dma_fence_preempt_work_func);
> > + spin_lock_init(&fence->lock);
> > + dma_fence_init(&fence->base, &preempt_fence_ops,
> > + &fence->lock, context, seqno);
> > +}
> > +EXPORT_SYMBOL(dma_fence_preempt_init);
> > diff --git a/include/linux/dma-fence-preempt.h b/include/linux/dma-fence-preempt.h
> > new file mode 100644
> > index 000000000000..28d803f89527
> > --- /dev/null
> > +++ b/include/linux/dma-fence-preempt.h
> > @@ -0,0 +1,56 @@
> > +/* SPDX-License-Identifier: MIT */
> > +/*
> > + * Copyright © 2024 Intel Corporation
> > + */
> > +
> > +#ifndef __LINUX_DMA_FENCE_PREEMPT_H
> > +#define __LINUX_DMA_FENCE_PREEMPT_H
> > +
> > +#include <linux/dma-fence.h>
> > +#include <linux/workqueue.h>
> > +
> > +struct dma_fence_preempt;
> > +struct dma_resv;
> > +
> > +/**
> > + * struct dma_fence_preempt_ops - Preempt fence operations
> > + *
> > + * These functions should be implemented in the driver side.
> > + */
> > +struct dma_fence_preempt_ops {
> > + /** @preempt_delay: Preempt execution with a delay */
> > + struct dma_fence *(*preempt_delay)(struct dma_fence_preempt *fence);
> > + /** @preempt: Preempt execution */
> > + int (*preempt)(struct dma_fence_preempt *fence);
> > + /** @preempt_wait: Wait for preempt of execution to complete */
> > + int (*preempt_wait)(struct dma_fence_preempt *fence);
> > + /** @preempt_finished: Signal that the preempt has finished */
> > + void (*preempt_finished)(struct dma_fence_preempt *fence);
> > +};
> > +
> > +/**
> > + * struct dma_fence_preempt - Embedded preempt fence base class
> > + */
> > +struct dma_fence_preempt {
> > + /** @base: Fence base class */
> > + struct dma_fence base;
> > + /** @lock: Spinlock for fence handling */
> > + spinlock_t lock;
> > + /** @cb: Callback preempt delay */
> > + struct dma_fence_cb cb;
> > + /** @ops: Preempt fence operation */
> > + const struct dma_fence_preempt_ops *ops;
> > + /** @wq: Work queue for preempt wait */
> > + struct workqueue_struct *wq;
> > + /** @work: Work struct for preempt wait */
> > + struct work_struct work;
> > +};
> > +
> > +bool dma_fence_is_preempt(const struct dma_fence *fence);
> > +
> > +void dma_fence_preempt_init(struct dma_fence_preempt *fence,
> > + const struct dma_fence_preempt_ops *ops,
> > + struct workqueue_struct *wq,
> > + u64 context, u64 seqno);
> > +
> > +#endif
>