From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: "Hogander, Jouni" <jouni.hogander@intel.com>
Cc: "Nikula, Jani" <jani.nikula@intel.com>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [Intel-xe] [RFC PATCH 0/3] Xe dma fence handling on atomic commit
Date: Thu, 28 Sep 2023 19:21:34 +0300 [thread overview]
Message-ID: <ZRWoDhvisTOYi-nK@intel.com> (raw)
In-Reply-To: <ZRWlht3cvt4WYCsu@intel.com>
On Thu, Sep 28, 2023 at 07:10:46PM +0300, Ville Syrjälä wrote:
> On Thu, Sep 28, 2023 at 08:23:41AM +0000, Hogander, Jouni wrote:
> > On Wed, 2023-09-27 at 14:35 +0200, Maarten Lankhorst wrote:
> > > Hey,
> > >
> > > On 2023-09-27 12:45, Hogander, Jouni wrote:
> > > > On Wed, 2023-09-27 at 12:33 +0200, Maarten Lankhorst wrote:
> > > > > Hey,
> > > > >
> > > > > When we wrote the original display support, we purposely decided
> > > > > not to add i915_sw_fence support.
> > > > >
> > > > > In this case, I think a better approach would be to remove this
> > > > > code from i915 as well, and end up with cleaner display code for
> > > > > both drivers.
> > > >
> > > > Yes, I agree this would eventually be the goal. I did some
> > > > experiments here:
> > > >
> > > > https://patchwork.freedesktop.org/patch/558982/?series=123898&rev=4
> > > >
> > > > which replaces i915_sw_fence with the same code I'm using for the
> > > > Xe driver in this patch set. The problem is GPU reset detection. I
> > > > currently have no good ideas on how to tackle that without
> > > > compromising i915 functionality in this scenario, so I ended up
> > > > doing this only for Xe to ensure it doesn't block upstreaming Xe.
> > > > Would this be acceptable as a temporary solution, to be resolved
> > > > after upstreaming? Anyway, what I'm doing in these patches is not
> > > > really a revised i915_sw_fence, but uses dma_fences.
> > > In atomic, plane_state->fence is set for all fences, and that
> > > should be enough. I think creating a reservation object is massive
> > > overkill, and we should try to use
> > > drm_atomic_helper_wait_for_fences() if at all possible.
> >
> > I checked this plane_state->fence. To me it looks like it's for
> > userspace to set? I don't think this is done for i915. For Xe, I
> > don't know whether it's done.
> >
> > Also, i915 waits for fences on both the old and the new framebuffer
> > objects. Shouldn't we do the same for Xe as well?
>
> xe shouldn't need the old fb fence since userspace there won't be
> doing any MI_SCANLINE_WAIT stuff.
>
> > Anyway, the dma_fence implementation currently has no mechanism to
> > get a common fence for two separate dma_resv objects
> > (dma_resv_get_singleton operates on a single one), so I did this by
> > creating a new dma_resv for the purpose. If there is no reason to
> > wait for possible fences on the old fb object, maybe we could just do
> > dma_resv_get_singleton in prepare planes for Xe and wait for that and
> > uapi.fence in commit?
>
> drm_gem_plane_helper_prepare_fb() supposedly does something that
> already works for xe. Looks like it does a few things:
> - populates plane_state->fence with dma_resv_get_singleton() if using
>   implicit fencing
> - uses dma_fence_chain internally to add additional fences (kernel
>   internal ones for memory movement etc., maybe?) to the uapi-supplied
>   explicit fence
>
> Presumably we should be able to glue on the old fb fence using
> dma_fence_chain as well?
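For illustration, gluing the old fb's fence onto the fence already sitting
in the new plane state could look roughly like the sketch below. This is an
untested, hypothetical fragment, not actual driver code: the function name
is made up, error handling and reference counting are simplified, and the
seqno choice is arbitrary. Only the core calls (dma_resv_get_singleton(),
dma_fence_chain_alloc(), dma_fence_chain_init()) are real kernel APIs.

```c
/*
 * Hypothetical sketch: chain the old fb's implicit fences onto the
 * fence the atomic helper already put in new_state->fence, so a single
 * wait on new_state->fence covers both the new and the old fb.
 */
static int xe_plane_glue_old_fb_fence(struct drm_plane_state *new_state,
				      struct drm_framebuffer *old_fb)
{
	struct dma_fence_chain *chain;
	struct dma_fence *old_fence;
	int ret;

	/* Collect everything pending on the old fb's backing object. */
	ret = dma_resv_get_singleton(old_fb->obj[0]->resv,
				     DMA_RESV_USAGE_WRITE, &old_fence);
	if (ret || !old_fence)
		return ret;

	chain = dma_fence_chain_alloc();
	if (!chain) {
		dma_fence_put(old_fence);
		return -ENOMEM;
	}

	/* Chain consumes both references; new_state->fence now covers both. */
	dma_fence_chain_init(chain, new_state->fence, old_fence, 1);
	new_state->fence = &chain->base;
	return 0;
}
```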
Oh and the display vs. GPU reset chicken/egg problem might be
the hard nut to crack. We need to get the pending display commits
flushed out before we can let a GPU reset clobber the display, but
the display commits are waiting on the fences and thus neither the
display commits nor the GPU reset can proceed, I think. Having to
wait for some kind of huge fence timeout in the display commit seems
a bit sub-optimal. Or would the fences get signalled when we detect
the GPU hang? If not, can we achieve something like that?
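One conceivable way out, sketched below, is what timeout-based GPU reset
paths in drivers commonly do: when a hang is detected, force-complete the
fences of jobs that will never finish, with an error attached, so that
waiters (including pending display commits) unblock before the reset
touches the hardware. This is a hypothetical sketch, not xe code; the
struct and function names are invented, and only dma_fence_set_error()
and dma_fence_signal() are real kernel APIs.

```c
/* Hypothetical per-driver bookkeeping for an in-flight job. */
struct pending_job {
	struct list_head link;
	struct dma_fence *done_fence;
};

/*
 * Hypothetical hang-recovery sketch: force-signal the fences of jobs
 * that will never complete, so display commits waiting on them can
 * proceed before the reset clobbers the display state.
 */
static void gt_reset_kick_pending_fences(struct list_head *pending_jobs)
{
	struct pending_job *job;

	list_for_each_entry(job, pending_jobs, link) {
		/* Mark the fence as failed, then complete it. */
		dma_fence_set_error(job->done_fence, -ECANCELED);
		dma_fence_signal(job->done_fence);
	}
}
```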
--
Ville Syrjälä
Intel
Thread overview: 18+ messages
2023-09-27 7:31 [Intel-xe] [RFC PATCH 0/3] Xe dma fence handling on atomic commit Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 1/3] Revert "FIXME: drm/i915: fence stuff" Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 2/3] drm/i915/display: Move fence completion wait away from display code Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 3/3] fixup! drm/xe/display: Implement display support Jouni Högander
2023-09-27 7:34 ` [Intel-xe] ✓ CI.Patch_applied: success for Xe dma fence handling on atomic commit Patchwork
2023-09-27 7:34 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-09-27 7:35 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-09-27 7:42 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-09-27 7:43 ` [Intel-xe] ✓ CI.Hooks: " Patchwork
2023-09-27 7:44 ` [Intel-xe] ✗ CI.checksparse: warning " Patchwork
2023-09-27 8:16 ` [Intel-xe] ✓ CI.BAT: success " Patchwork
2023-09-27 10:33 ` [Intel-xe] [RFC PATCH 0/3] " Maarten Lankhorst
2023-09-27 10:45 ` Hogander, Jouni
2023-09-27 12:35 ` Maarten Lankhorst
2023-09-28 8:23 ` Hogander, Jouni
2023-09-28 16:10 ` Ville Syrjälä
2023-09-28 16:21 ` Ville Syrjälä [this message]
2023-10-16 11:22 ` Hogander, Jouni