From: Daniel Vetter <daniel-/w4YWyX8dFk@public.gmane.org>
To: Chunming Zhou <david1.zhou-5C7GfCeVMHo@public.gmane.org>
Cc: "Christian König"
<ckoenig.leichtzumerken-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>,
intel-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
amd-gfx-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org,
Christian.Koenig-5C7GfCeVMHo@public.gmane.org,
dri-devel-PD4FTy7X32lNgt0PjOBp9y5qC8QIuHrW@public.gmane.org
Subject: Re: [Intel-gfx] [PATCH 03/10] drm/syncobj: add new drm_syncobj_add_point interface v2
Date: Wed, 12 Dec 2018 11:49:16 +0100
Message-ID: <20181212104916.GV21184@phenom.ffwll.local>
In-Reply-To: <20181207155422.15967-3-david1.zhou-5C7GfCeVMHo@public.gmane.org>
On Fri, Dec 07, 2018 at 11:54:15PM +0800, Chunming Zhou wrote:
> From: Christian König <ckoenig.leichtzumerken@gmail.com>
>
> Use the dma_fence_chain object to create a timeline of fence objects
> instead of just replacing the existing fence.
>
> v2: rebase and cleanup
>
> Signed-off-by: Christian König <christian.koenig@amd.com>
Somewhat jumping back into this; I'm not sure whether we already discussed
it. I'm a bit unclear on why we have to chain the fences in the timeline:
- The timeline stuff is modelled after the WDDM2 monitored fences, which
  really are just u64 counters in memory somewhere (I think it could be
  system RAM or VRAM). Because WDDM2 has memory management entirely
  separated from rendering synchronization, it totally allows userspace to
  create loops, deadlocks and everything else nasty using this; the
  memory manager won't deadlock because these monitored fences never leak
  into the buffer manager. And if command submissions deadlock, GPU reset
  takes care of the mess.
- This has a few consequences; they do indeed seem to work like a plain
  memory location: userspace incrementing out of order (because it runs
  batches updating the same fence on different engines) is totally fine,
  as is doing anything else "stupid".
- Now on Linux we can't allow any of this, because we need to make sure
  that deadlocks don't leak into the memory manager. But as long as we
  block until the underlying dma_fence has materialized, nothing userspace
  can do will lead to such a deadlock. That holds even if userspace ends
  up submitting jobs without enough built-in synchronization, leading to
  out-of-order signalling of fences on that "timeline", and I don't think
  that would pose a problem for us.
Essentially I think we can look at a timeline syncobj as a dma_fence
container indexed through an integer, and there's no need to enforce that
the timeline works like a real dma_fence timeline, with all its
guarantees. It's just a pile of (possibly, if userspace is stupid)
unrelated dma_fences. You could implement the entire thing in userspace,
after all, except for the "we want to share these timeline objects between
processes" problem.
tl;dr: I think we can drop the dma_fence_chain complexity completely. Or
at least I'm not really understanding why it's needed.
Of course that means drivers cannot treat a drm_syncobj timeline as a
dma_fence timeline. But given the future fences stuff and all that, that's
already out of the window anyway.
What am I missing?
-Daniel
> ---
>  drivers/gpu/drm/drm_syncobj.c | 37 +++++++++++++++++++++++++++++++++++
>  include/drm/drm_syncobj.h     |  5 +++++
>  2 files changed, 42 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_syncobj.c b/drivers/gpu/drm/drm_syncobj.c
> index e19525af0cce..51f798e2194f 100644
> --- a/drivers/gpu/drm/drm_syncobj.c
> +++ b/drivers/gpu/drm/drm_syncobj.c
> @@ -122,6 +122,43 @@ static void drm_syncobj_remove_wait(struct drm_syncobj *syncobj,
>  	spin_unlock(&syncobj->lock);
>  }
>  
> +/**
> + * drm_syncobj_add_point - add new timeline point to the syncobj
> + * @syncobj: sync object to add timeline point to
> + * @chain: chain node to use to add the point
> + * @fence: fence to encapsulate in the chain node
> + * @point: sequence number to use for the point
> + *
> + * Add the chain node as new timeline point to the syncobj.
> + */
> +void drm_syncobj_add_point(struct drm_syncobj *syncobj,
> +			   struct dma_fence_chain *chain,
> +			   struct dma_fence *fence,
> +			   uint64_t point)
> +{
> +	struct syncobj_wait_entry *cur, *tmp;
> +	struct dma_fence *prev;
> +
> +	dma_fence_get(fence);
> +
> +	spin_lock(&syncobj->lock);
> +
> +	prev = rcu_dereference_protected(syncobj->fence,
> +					 lockdep_is_held(&syncobj->lock));
> +	dma_fence_chain_init(chain, prev, fence, point);
> +	rcu_assign_pointer(syncobj->fence, &chain->base);
> +
> +	list_for_each_entry_safe(cur, tmp, &syncobj->cb_list, node) {
> +		list_del_init(&cur->node);
> +		syncobj_wait_syncobj_func(syncobj, cur);
> +	}
> +	spin_unlock(&syncobj->lock);
> +
> +	/* Walk the chain once to trigger garbage collection */
> +	dma_fence_chain_for_each(prev, fence);
> +}
> +EXPORT_SYMBOL(drm_syncobj_add_point);
> +
> /**
> * drm_syncobj_replace_fence - replace fence in a sync object.
> * @syncobj: Sync object to replace fence in
> diff --git a/include/drm/drm_syncobj.h b/include/drm/drm_syncobj.h
> index 7c6ed845c70d..8acb4ae4f311 100644
> --- a/include/drm/drm_syncobj.h
> +++ b/include/drm/drm_syncobj.h
> @@ -27,6 +27,7 @@
> #define __DRM_SYNCOBJ_H__
>
> #include "linux/dma-fence.h"
> +#include "linux/dma-fence-chain.h"
>
> /**
> * struct drm_syncobj - sync object.
> @@ -110,6 +111,10 @@ drm_syncobj_fence_get(struct drm_syncobj *syncobj)
>
> struct drm_syncobj *drm_syncobj_find(struct drm_file *file_private,
> u32 handle);
> +void drm_syncobj_add_point(struct drm_syncobj *syncobj,
> +			   struct dma_fence_chain *chain,
> +			   struct dma_fence *fence,
> +			   uint64_t point);
>  void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
>  			       struct dma_fence *fence);
> int drm_syncobj_find_fence(struct drm_file *file_private,
> --
> 2.17.1
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch