From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <ckoenig.leichtzumerken@gmail.com>
Cc: Alex Deucher <Alexander.Deucher@amd.com>,
amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [PATCH 1/2] drm/amdgpu: unwrap fence chains in the explicit sync fence
Date: Thu, 17 Jun 2021 21:38:24 +0200 [thread overview]
Message-ID: <YMuksIKzYmgCZ2qS@phenom.ffwll.local> (raw)
In-Reply-To: <YMuGsGN/mxY+WU+q@phenom.ffwll.local>
On Thu, Jun 17, 2021 at 07:30:24PM +0200, Daniel Vetter wrote:
> On Thu, Jun 17, 2021 at 09:44:25AM +0200, Christian König wrote:
> > Alex, do you want to review those so that we can close the ticket?
>
> Maybe I'm behind on mails, but the 2nd patch still has the issues I think
> I'm seeing ...
Ok, with temperatures getting colder towards the night, the 2nd patch looks
much better now :-) I replied there.
-Daniel
> -Daniel
>
> >
> > Thanks,
> > Christian.
> >
> > Am 14.06.21 um 19:45 schrieb Christian König:
> > > Unwrap the explicit fence if it is a dma_fence_chain and
> > > sync to the first fence not matching the owner rules.
> > >
> > > Signed-off-by: Christian König <christian.koenig@amd.com>
> > > Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > > ---
> > > drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c | 118 +++++++++++++----------
> > > 1 file changed, 68 insertions(+), 50 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> > > index 1b2ceccaf5b0..862eb3c1c4c5 100644
> > > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> > > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
> > > @@ -28,6 +28,8 @@
> > > * Christian König <christian.koenig@amd.com>
> > > */
> > > +#include <linux/dma-fence-chain.h>
> > > +
> > > #include "amdgpu.h"
> > > #include "amdgpu_trace.h"
> > > #include "amdgpu_amdkfd.h"
> > > @@ -186,6 +188,55 @@ int amdgpu_sync_vm_fence(struct amdgpu_sync *sync, struct dma_fence *fence)
> > > return amdgpu_sync_fence(sync, fence);
> > > }
> > > +/* Determine based on the owner and mode if we should sync to a fence or not */
> > > +static bool amdgpu_sync_test_fence(struct amdgpu_device *adev,
> > > + enum amdgpu_sync_mode mode,
> > > + void *owner, struct dma_fence *f)
> > > +{
> > > + void *fence_owner = amdgpu_sync_get_owner(f);
> > > +
> > > + /* Always sync to moves, no matter what */
> > > + if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED)
> > > + return true;
> > > +
> > > + /* We only want to trigger KFD eviction fences on
> > > + * evict or move jobs. Skip KFD fences otherwise.
> > > + */
> > > + if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
> > > + owner != AMDGPU_FENCE_OWNER_UNDEFINED)
> > > + return false;
> > > +
> > > + /* Never sync to VM updates either. */
> > > + if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
> > > + owner != AMDGPU_FENCE_OWNER_UNDEFINED)
> > > + return false;
> > > +
> > > + /* Ignore fences depending on the sync mode */
> > > + switch (mode) {
> > > + case AMDGPU_SYNC_ALWAYS:
> > > + return true;
> > > +
> > > + case AMDGPU_SYNC_NE_OWNER:
> > > + if (amdgpu_sync_same_dev(adev, f) &&
> > > + fence_owner == owner)
> > > + return false;
> > > + break;
> > > +
> > > + case AMDGPU_SYNC_EQ_OWNER:
> > > + if (amdgpu_sync_same_dev(adev, f) &&
> > > + fence_owner != owner)
> > > + return false;
> > > + break;
> > > +
> > > + case AMDGPU_SYNC_EXPLICIT:
> > > + return false;
> > > + }
> > > +
> > > + WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
> > > + "Adding eviction fence to sync obj");
> > > + return true;
> > > +}
> > > +
> > > /**
> > > * amdgpu_sync_resv - sync to a reservation object
> > > *
> > > @@ -211,67 +262,34 @@ int amdgpu_sync_resv(struct amdgpu_device *adev, struct amdgpu_sync *sync,
> > > /* always sync to the exclusive fence */
> > > f = dma_resv_excl_fence(resv);
> > > - r = amdgpu_sync_fence(sync, f);
> > > + dma_fence_chain_for_each(f, f) {
> > > + struct dma_fence_chain *chain = to_dma_fence_chain(f);
> > > +
> > > + if (amdgpu_sync_test_fence(adev, mode, owner, chain ?
> > > + chain->fence : f)) {
> > > + r = amdgpu_sync_fence(sync, f);
> > > + dma_fence_put(f);
> > > + if (r)
> > > + return r;
> > > + break;
> > > + }
> > > + }
> > > flist = dma_resv_shared_list(resv);
> > > - if (!flist || r)
> > > - return r;
> > > + if (!flist)
> > > + return 0;
> > > for (i = 0; i < flist->shared_count; ++i) {
> > > - void *fence_owner;
> > > -
> > > f = rcu_dereference_protected(flist->shared[i],
> > > dma_resv_held(resv));
> > > - fence_owner = amdgpu_sync_get_owner(f);
> > > -
> > > - /* Always sync to moves, no matter what */
> > > - if (fence_owner == AMDGPU_FENCE_OWNER_UNDEFINED) {
> > > + if (amdgpu_sync_test_fence(adev, mode, owner, f)) {
> > > r = amdgpu_sync_fence(sync, f);
> > > if (r)
> > > - break;
> > > - }
> > > -
> > > - /* We only want to trigger KFD eviction fences on
> > > - * evict or move jobs. Skip KFD fences otherwise.
> > > - */
> > > - if (fence_owner == AMDGPU_FENCE_OWNER_KFD &&
> > > - owner != AMDGPU_FENCE_OWNER_UNDEFINED)
> > > - continue;
> > > -
> > > - /* Never sync to VM updates either. */
> > > - if (fence_owner == AMDGPU_FENCE_OWNER_VM &&
> > > - owner != AMDGPU_FENCE_OWNER_UNDEFINED)
> > > - continue;
> > > -
> > > - /* Ignore fences depending on the sync mode */
> > > - switch (mode) {
> > > - case AMDGPU_SYNC_ALWAYS:
> > > - break;
> > > -
> > > - case AMDGPU_SYNC_NE_OWNER:
> > > - if (amdgpu_sync_same_dev(adev, f) &&
> > > - fence_owner == owner)
> > > - continue;
> > > - break;
> > > -
> > > - case AMDGPU_SYNC_EQ_OWNER:
> > > - if (amdgpu_sync_same_dev(adev, f) &&
> > > - fence_owner != owner)
> > > - continue;
> > > - break;
> > > -
> > > - case AMDGPU_SYNC_EXPLICIT:
> > > - continue;
> > > + return r;
> > > }
> > > -
> > > - WARN(debug_evictions && fence_owner == AMDGPU_FENCE_OWNER_KFD,
> > > - "Adding eviction fence to sync obj");
> > > - r = amdgpu_sync_fence(sync, f);
> > > - if (r)
> > > - break;
> > > }
> > > - return r;
> > > + return 0;
> > > }
> > > /**
> >
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch