From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
To: "Hogander, Jouni" <jouni.hogander@intel.com>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Cc: "Nikula, Jani" <jani.nikula@intel.com>
Subject: Re: [Intel-xe] [RFC PATCH 0/3] Xe dma fence handling on atomic commit
Date: Wed, 27 Sep 2023 14:35:00 +0200 [thread overview]
Message-ID: <85245068-4cce-8851-e527-29792385f78b@linux.intel.com> (raw)
In-Reply-To: <2fd37d47aacb192f9d4b4e8491743323536f364c.camel@intel.com>

Hey,

On 2023-09-27 12:45, Hogander, Jouni wrote:
> On Wed, 2023-09-27 at 12:33 +0200, Maarten Lankhorst wrote:
>> Hey,
>>
>> When we wrote the original display support, we purposely decided on
>> not
>> adding i915_sw_fence support.
>>
>> In this case, I think a better approach would be to remove this code
>> from i915 as well, and end up with cleaner display code for both
>> drivers.
>
> Yes, I agree eventually this would be the goal. I did some experiments
> here:
>
> https://patchwork.freedesktop.org/patch/558982/?series=123898&rev=4
>
> which replaces i915_sw_fence with the same code I'm using for the Xe
> driver in this patch set. The problem is GPU reset detection. I don't
> currently have good ideas on how to tackle that without compromising
> i915 functionality in this scenario, so I ended up doing this only for
> Xe to ensure it is not blocking upstreaming Xe. Would this be
> acceptable as a temporary solution, to be solved after upstreaming?
> Anyway, what I'm doing in these patches is not really a revised
> i915_sw_fence, but uses dma_fences.

In atomic, plane_state->fence is set for all fences, and that should be
enough. I think creating a reservation object is massive overkill; we
should use drm_atomic_helper_wait_for_fences() if at all possible.

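To make the suggestion concrete, a minimal sketch of a commit tail built
on the standard DRM atomic helpers is shown below. The function name
xe_atomic_commit_tail is hypothetical (not taken from the patches under
discussion); the helper calls are the documented drm_atomic_helper_*
entry points, with drm_atomic_helper_wait_for_fences() waiting on the
plane_state->fence set up during plane prepare, instead of a
driver-private reservation object:

```c
/*
 * Hypothetical commit tail (illustrative sketch only, not the actual
 * Xe code): the fb prepare hook / drm_atomic_helper_prepare_planes()
 * is assumed to have populated plane_state->fence for each plane.
 */
static void xe_atomic_commit_tail(struct drm_atomic_state *state)
{
	struct drm_device *dev = state->dev;

	/* Wait on every plane_state->fence before touching the hardware. */
	drm_atomic_helper_wait_for_fences(dev, state, false);

	drm_atomic_helper_commit_modeset_disables(dev, state);
	drm_atomic_helper_commit_planes(dev, state, 0);
	drm_atomic_helper_commit_modeset_enables(dev, state);

	drm_atomic_helper_commit_hw_done(state);
	drm_atomic_helper_wait_for_vblanks(dev, state);
	drm_atomic_helper_cleanup_planes(dev, state);
}
```

With this shape there is nothing Xe-specific left in the fence handling;
the pre-commit wait is entirely covered by the shared helper.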
Cheers,
~Maarten
Thread overview: 18+ messages
2023-09-27 7:31 [Intel-xe] [RFC PATCH 0/3] Xe dma fence handling on atomic commit Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 1/3] Revert "FIXME: drm/i915: fence stuff" Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 2/3] drm/i915/display: Move fence completion wait away from display code Jouni Högander
2023-09-27 7:31 ` [Intel-xe] [RFC PATCH 3/3] fixup! drm/xe/display: Implement display support Jouni Högander
2023-09-27 10:33 ` [Intel-xe] [RFC PATCH 0/3] " Maarten Lankhorst
2023-09-27 10:45 ` Hogander, Jouni
2023-09-27 12:35 ` Maarten Lankhorst [this message]
2023-09-28 8:23 ` Hogander, Jouni
2023-09-28 16:10 ` Ville Syrjälä
2023-09-28 16:21 ` Ville Syrjälä
2023-10-16 11:22 ` Hogander, Jouni