Intel-XE Archive on lore.kernel.org
From: "Ville Syrjälä" <ville.syrjala@linux.intel.com>
To: "Jouni Högander" <jouni.hogander@intel.com>
Cc: jani.nikula@intel.com, rodrigo.vivi@kernel.org,
	intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [RFC PATCH v2 22/23] drm/i915: Handle dma fences in dirtyfb callback
Date: Thu, 13 Jul 2023 23:08:04 +0300	[thread overview]
Message-ID: <ZLBZpHgWf1eH_RGV@intel.com> (raw)
In-Reply-To: <20230510121152.736148-23-jouni.hogander@intel.com>

On Wed, May 10, 2023 at 03:11:51PM +0300, Jouni Högander wrote:
> Take dma fences into account in the dirtyfb callback. If there are no
> unsignaled dma fences, perform the flush immediately. If there are
> unsignaled dma fences, perform an invalidate and add a callback which
> will queue a flush when the fences get signaled.
> 
> Signed-off-by: Jouni Högander <jouni.hogander@intel.com>
> ---
>  drivers/gpu/drm/i915/display/intel_fb.c | 55 +++++++++++++++++++++++--
>  1 file changed, 52 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c
> index fa4464d433b7..fc325f2299a4 100644
> --- a/drivers/gpu/drm/i915/display/intel_fb.c
> +++ b/drivers/gpu/drm/i915/display/intel_fb.c
> @@ -8,6 +8,9 @@
>  #include <drm/drm_framebuffer.h>
>  #include <drm/drm_modeset_helper.h>
>  
> +#include <linux/dma-fence.h>
> +#include <linux/dma-resv.h>
> +
>  #include "i915_drv.h"
>  #include "intel_display.h"
>  #include "intel_display_types.h"
> @@ -1888,6 +1891,20 @@ static int intel_user_framebuffer_create_handle(struct drm_framebuffer *fb,
>  }
>  
>  #ifdef I915
> +struct frontbuffer_fence_cb {
> +	struct dma_fence_cb base;
> +	struct intel_frontbuffer *front;
> +};
> +
> +static void intel_user_framebuffer_fence_wake(struct dma_fence *dma,
> +					      struct dma_fence_cb *data)
> +{
> +	struct frontbuffer_fence_cb *cb = container_of(data, typeof(*cb), base);
> +
> +	intel_frontbuffer_queue_flush(cb->front);
> +	kfree(cb);
> +}
> +
>  static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb,
>  					struct drm_file *file,
>  					unsigned int flags, unsigned int color,
> @@ -1895,11 +1912,43 @@ static int intel_user_framebuffer_dirty(struct drm_framebuffer *fb,
>  					unsigned int num_clips)
>  {
>  	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
> +	struct intel_frontbuffer *front = to_intel_frontbuffer(fb);
> +	struct dma_resv_iter cursor;
> +	struct dma_fence *fence;
> +	int ret = 0;
> +
> +	if (dma_resv_test_signaled(intel_bo_to_drm_bo(obj)->resv, dma_resv_usage_rw(false))) {
> +		intel_bo_flush_if_display(obj);
> +		intel_frontbuffer_flush(front, ORIGIN_DIRTYFB);
> +		return 0;
> +	}
>  
> -	intel_bo_flush_if_display(obj);
> -	intel_frontbuffer_flush(to_intel_frontbuffer(fb), ORIGIN_DIRTYFB);
> +	intel_frontbuffer_invalidate(front, ORIGIN_DIRTYFB);
>  
> -	return 0;
> +	dma_resv_iter_begin(&cursor, intel_bo_to_drm_bo(obj)->resv,
> +			    dma_resv_usage_rw(false));
> +	dma_resv_for_each_fence_unlocked(&cursor, fence) {
> +		struct frontbuffer_fence_cb *cb =
> +			kmalloc(sizeof(struct frontbuffer_fence_cb), GFP_KERNEL);
> +		if (!cb) {
> +			ret = -ENOMEM;
> +			break;
> +		}
> +		cb->front = front;
> +
> +		ret = dma_fence_add_callback(fence, &cb->base,
> +					     intel_user_framebuffer_fence_wake);
> +		if (ret) {
> +			intel_user_framebuffer_fence_wake(fence, &cb->base);
> +			if (ret == -ENOENT)
> +				ret = 0;
> +			else
> +				break;
> +		}
> +	}
> +	dma_resv_iter_end(&cursor);

AFAICS we could use dma_resv_get_singleton() here to get just a
single callback once all the included fences have signalled. It
might also reduce the number of kmalloc() calls a bit, though
dma_resv_get_singleton() itself does seem to end up doing multiple
allocations as well, so perhaps that could be optimized further.

The other thing dma_resv_get_singleton() does is reference
counting of the fences. But I'm not sure that's needed here,
ie. I'm not sure what the lifetime rules are.
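
For illustration, an untested kernel-style sketch of what that might
look like (reusing struct frontbuffer_fence_cb from the patch; how long
the extra fence reference must be held is a guess, given the open
lifetime question above):

```c
/*
 * Untested sketch only: one callback via dma_resv_get_singleton()
 * instead of one per fence. dma_resv_get_singleton() returns a
 * single fence (possibly a merged fence array) that signals once
 * all the selected fences have signalled.
 */
struct dma_fence *fence;
int ret;

ret = dma_resv_get_singleton(intel_bo_to_drm_bo(obj)->resv,
			     dma_resv_usage_rw(false), &fence);
if (ret)
	return ret;

if (fence) {
	struct frontbuffer_fence_cb *cb = kmalloc(sizeof(*cb), GFP_KERNEL);

	if (!cb) {
		dma_fence_put(fence);
		return -ENOMEM;
	}
	cb->front = front;

	ret = dma_fence_add_callback(fence, &cb->base,
				     intel_user_framebuffer_fence_wake);
	if (ret == -ENOENT) {
		/* Already signalled, flush right away. */
		intel_user_framebuffer_fence_wake(fence, &cb->base);
		ret = 0;
	}
	/* Whether this put can happen before the cb runs depends on
	 * the lifetime rules mentioned above. */
	dma_fence_put(fence);
}
return ret;
```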


I was also pondering what kind of problematic scenarios we might
hit here. This is what I came up with:

* scenario 1:

 flip(PLANE A):
  -> FB A.bits=PLANE A
 set fence(FB A):
  -> FB A.fence = fence 1
 dirtyfb(FB A):
  -> fence 1 !signalled -> invalidate FB A.bits==PLANE A
  -> fence 1 queue cb
 flip(PLANE A):
  -> FB A.bits = 0
  -> FB B.bits = PLANE A
 fence 1 cb -> flush FB A.bits=0

 In the end the tracking is left in an invalidated state, at least for
 FBC AFAICS. A possible fix would be to clear the FBC busy_bits on flip [1]?
 DRRS is fine I think, since every flip already clears its busy_bits.
 Not sure what PSR does.


[1]
@@ -1299,11 +1299,9 @@ static void __intel_fbc_post_update(struct intel_fbc *fbc)
        lockdep_assert_held(&fbc->lock);

        fbc->flip_pending = false;
+       fbc->busy_bits = 0;

-       if (!fbc->busy_bits)
-               intel_fbc_activate(fbc);
-       else
-               intel_fbc_deactivate(fbc, "frontbuffer write");
+       intel_fbc_activate(fbc);
 }


* scenario 2:

 flip(PLANE A):
  -> FB A.bits=PLANE A
 set fence(FB A):
  -> FB A.fence = fence 1
 dirtyfb(FB A):
  -> fence 1 !signalled -> invalidate FB A.bits==PLANE A
  -> fence 1 queue cb
 set fence(FB A):
  -> FB A.fence = fence 2
 dirtyfb(FB A):
  -> fence 2 !signalled -> invalidate FB A.bits==PLANE A
  -> fence 2 queue cb
 fence 1 cb -> flush FB A.bits==PLANE A
  -> frontbuffer tracking flushed before fence 2 has signalled
 ...
 fence 2 cb -> flush FB A.bits==PLANE A

 Perhaps we should keep track of how many fences are actually pending,
 and only do the frontbuffer flush when the count drops to zero?
 OTOH the final flush should still guarantee some kind of correctness
 in the end, so I'm not sure this is really a big problem.

> +
> +	return ret;
>  }
>  #endif
>  
> -- 
> 2.34.1

-- 
Ville Syrjälä
Intel

