* [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb()
@ 2024-10-20 23:08 Dmitry Osipenko
2024-10-20 23:08 ` [PATCH v3 2/2] drm/virtio: New fence for every plane update Dmitry Osipenko
` (4 more replies)
0 siblings, 5 replies; 8+ messages in thread
From: Dmitry Osipenko @ 2024-10-20 23:08 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Rob Clark, Pierre-Eric Pelloux-Prayer, Kim Dongwon,
Kasireddy Vivek
Cc: dri-devel, virtualization, linux-kernel, kernel
From: Dongwon Kim <dongwon.kim@intel.com>
Use the drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
synchronization. We need to wait for explicit fences in the case of
Venus and native contexts when guest userspace uses explicit fencing.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
[dmitry.osipenko@collabora.com: Edit commit message]
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a72a2dbda031..ab7232921cb7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -26,6 +26,7 @@
#include <drm/drm_atomic_helper.h>
#include <drm/drm_damage_helper.h>
#include <drm/drm_fourcc.h>
+#include <drm/drm_gem_atomic_helper.h>
#include "virtgpu_drv.h"
@@ -254,6 +255,9 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
vgfb = to_virtio_gpu_framebuffer(new_state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
+ drm_gem_plane_helper_prepare_fb(plane, new_state);
+
if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
return 0;
--
2.47.0
* [PATCH v3 2/2] drm/virtio: New fence for every plane update
2024-10-20 23:08 [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Dmitry Osipenko
@ 2024-10-20 23:08 ` Dmitry Osipenko
2024-10-22 4:44 ` Kasireddy, Vivek
2024-10-22 4:41 ` [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Kasireddy, Vivek
` (3 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: Dmitry Osipenko @ 2024-10-20 23:08 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Rob Clark, Pierre-Eric Pelloux-Prayer, Kim Dongwon,
Kasireddy Vivek
Cc: dri-devel, virtualization, linux-kernel, kernel
From: Dongwon Kim <dongwon.kim@intel.com>
Having the fence linked to the virtio_gpu_framebuffer causes a conflict
in the plane update sequence when several planes reference the same
framebuffer (e.g. an Xorg screen spanning multiple displays configured
in extended mode) and those planes are updated concurrently. Allocate
the fence per plane state instead of per framebuffer.
Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
[dmitry.osipenko@collabora.com: rebase, fix up, edit commit message]
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
drivers/gpu/drm/virtio/virtgpu_drv.h | 7 ++++
drivers/gpu/drm/virtio/virtgpu_plane.c | 58 +++++++++++++++++---------
2 files changed, 46 insertions(+), 19 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 64c236169db8..5dc8eeaf7123 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -194,6 +194,13 @@ struct virtio_gpu_framebuffer {
#define to_virtio_gpu_framebuffer(x) \
container_of(x, struct virtio_gpu_framebuffer, base)
+struct virtio_gpu_plane_state {
+ struct drm_plane_state base;
+ struct virtio_gpu_fence *fence;
+};
+#define to_virtio_gpu_plane_state(x) \
+ container_of(x, struct virtio_gpu_plane_state, base)
+
struct virtio_gpu_queue {
struct virtqueue *vq;
spinlock_t qlock;
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
index ab7232921cb7..2add67c6b66d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -67,11 +67,28 @@ uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)
return format;
}
+static struct
+drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
+{
+ struct virtio_gpu_plane_state *new;
+
+ if (WARN_ON(!plane->state))
+ return NULL;
+
+ new = kzalloc(sizeof(*new), GFP_KERNEL);
+ if (!new)
+ return NULL;
+
+ __drm_atomic_helper_plane_duplicate_state(plane, &new->base);
+
+ return &new->base;
+}
+
static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
.update_plane = drm_atomic_helper_update_plane,
.disable_plane = drm_atomic_helper_disable_plane,
.reset = drm_atomic_helper_plane_reset,
- .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
+ .atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};
@@ -139,11 +156,13 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
struct drm_device *dev = plane->dev;
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_plane_state *vgplane_st;
struct virtio_gpu_object *bo;
vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+ vgplane_st = to_virtio_gpu_plane_state(plane->state);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
- if (vgfb->fence) {
+ if (vgplane_st->fence) {
struct virtio_gpu_object_array *objs;
objs = virtio_gpu_array_alloc(1);
@@ -152,13 +171,11 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
virtio_gpu_array_lock_resv(objs);
virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
- width, height, objs, vgfb->fence);
+ width, height, objs,
+ vgplane_st->fence);
virtio_gpu_notify(vgdev);
-
- dma_fence_wait_timeout(&vgfb->fence->f, true,
+ dma_fence_wait_timeout(&vgplane_st->fence->f, true,
msecs_to_jiffies(50));
- dma_fence_put(&vgfb->fence->f);
- vgfb->fence = NULL;
} else {
virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
width, height, NULL, NULL);
@@ -248,12 +265,14 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
struct drm_device *dev = plane->dev;
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_plane_state *vgplane_st;
struct virtio_gpu_object *bo;
if (!new_state->fb)
return 0;
vgfb = to_virtio_gpu_framebuffer(new_state->fb);
+ vgplane_st = to_virtio_gpu_plane_state(new_state);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
drm_gem_plane_helper_prepare_fb(plane, new_state);
@@ -261,10 +280,11 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
return 0;
- if (bo->dumb && (plane->state->fb != new_state->fb)) {
- vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
+ if (bo->dumb) {
+ vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
+ vgdev->fence_drv.context,
0);
- if (!vgfb->fence)
+ if (!vgplane_st->fence)
return -ENOMEM;
}
@@ -274,15 +294,15 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
struct drm_plane_state *state)
{
- struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_plane_state *vgplane_st;
if (!state->fb)
return;
- vgfb = to_virtio_gpu_framebuffer(state->fb);
- if (vgfb->fence) {
- dma_fence_put(&vgfb->fence->f);
- vgfb->fence = NULL;
+ vgplane_st = to_virtio_gpu_plane_state(state);
+ if (vgplane_st->fence) {
+ dma_fence_put(&vgplane_st->fence->f);
+ vgplane_st->fence = NULL;
}
}
@@ -295,6 +315,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_output *output = NULL;
struct virtio_gpu_framebuffer *vgfb;
+ struct virtio_gpu_plane_state *vgplane_st;
struct virtio_gpu_object *bo = NULL;
uint32_t handle;
@@ -307,6 +328,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
if (plane->state->fb) {
vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
+ vgplane_st = to_virtio_gpu_plane_state(plane->state);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
handle = bo->hw_res_handle;
} else {
@@ -326,11 +348,9 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
(vgdev, 0,
plane->state->crtc_w,
plane->state->crtc_h,
- 0, 0, objs, vgfb->fence);
+ 0, 0, objs, vgplane_st->fence);
virtio_gpu_notify(vgdev);
- dma_fence_wait(&vgfb->fence->f, true);
- dma_fence_put(&vgfb->fence->f);
- vgfb->fence = NULL;
+ dma_fence_wait(&vgplane_st->fence->f, true);
}
if (plane->state->fb != old_state->fb) {
--
2.47.0
* RE: [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb()
2024-10-20 23:08 [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Dmitry Osipenko
2024-10-20 23:08 ` [PATCH v3 2/2] drm/virtio: New fence for every plane update Dmitry Osipenko
@ 2024-10-22 4:41 ` Kasireddy, Vivek
2024-11-06 1:27 ` Rob Clark
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Kasireddy, Vivek @ 2024-10-22 4:41 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
Chia-I Wu, Rob Clark, Pierre-Eric Pelloux-Prayer, Kim, Dongwon
Cc: dri-devel@lists.freedesktop.org,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, kernel@collabora.com
> Subject: [PATCH v3 1/2] drm/virtio: Use
> drm_gem_plane_helper_prepare_fb()
>
> From: Dongwon Kim <dongwon.kim@intel.com>
>
> Use the drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
> synchronization. We need to wait for explicit fences in the case of
> Venus and native contexts when guest userspace uses explicit fencing.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> [dmitry.osipenko@collabora.com: Edit commit message]
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c
> b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a72a2dbda031..ab7232921cb7 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -26,6 +26,7 @@
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_damage_helper.h>
> #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>
> #include "virtgpu_drv.h"
>
> @@ -254,6 +255,9 @@ static int virtio_gpu_plane_prepare_fb(struct
> drm_plane *plane,
>
> vgfb = to_virtio_gpu_framebuffer(new_state->fb);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> +
> + drm_gem_plane_helper_prepare_fb(plane, new_state);
> +
Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo-
> >guest_blob))
> return 0;
>
> --
> 2.47.0
* RE: [PATCH v3 2/2] drm/virtio: New fence for every plane update
2024-10-20 23:08 ` [PATCH v3 2/2] drm/virtio: New fence for every plane update Dmitry Osipenko
@ 2024-10-22 4:44 ` Kasireddy, Vivek
2024-10-25 17:59 ` Dmitry Osipenko
0 siblings, 1 reply; 8+ messages in thread
From: Kasireddy, Vivek @ 2024-10-22 4:44 UTC (permalink / raw)
To: Dmitry Osipenko, David Airlie, Gerd Hoffmann, Gurchetan Singh,
Chia-I Wu, Rob Clark, Pierre-Eric Pelloux-Prayer, Kim, Dongwon
Cc: dri-devel@lists.freedesktop.org,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, kernel@collabora.com
Hi Dmitry,
> Subject: [PATCH v3 2/2] drm/virtio: New fence for every plane update
>
> From: Dongwon Kim <dongwon.kim@intel.com>
>
> Having the fence linked to the virtio_gpu_framebuffer causes a conflict
> in the plane update sequence when several planes reference the same
> framebuffer (e.g. an Xorg screen spanning multiple displays configured
> in extended mode) and those planes are updated concurrently. Allocate
> the fence per plane state instead of per framebuffer.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> [dmitry.osipenko@collabora.com: rebase, fix up, edit commit message]
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/virtio/virtgpu_drv.h | 7 ++++
> drivers/gpu/drm/virtio/virtgpu_plane.c | 58 +++++++++++++++++---------
> 2 files changed, 46 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h
> b/drivers/gpu/drm/virtio/virtgpu_drv.h
> index 64c236169db8..5dc8eeaf7123 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_drv.h
> +++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
> @@ -194,6 +194,13 @@ struct virtio_gpu_framebuffer {
> #define to_virtio_gpu_framebuffer(x) \
> container_of(x, struct virtio_gpu_framebuffer, base)
>
> +struct virtio_gpu_plane_state {
> + struct drm_plane_state base;
> + struct virtio_gpu_fence *fence;
> +};
> +#define to_virtio_gpu_plane_state(x) \
> + container_of(x, struct virtio_gpu_plane_state, base)
> +
> struct virtio_gpu_queue {
> struct virtqueue *vq;
> spinlock_t qlock;
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c
> b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index ab7232921cb7..2add67c6b66d 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -67,11 +67,28 @@ uint32_t virtio_gpu_translate_format(uint32_t
> drm_fourcc)
> return format;
> }
>
> +static struct
> +drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane
> *plane)
> +{
> + struct virtio_gpu_plane_state *new;
> +
> + if (WARN_ON(!plane->state))
> + return NULL;
> +
> + new = kzalloc(sizeof(*new), GFP_KERNEL);
> + if (!new)
> + return NULL;
> +
> + __drm_atomic_helper_plane_duplicate_state(plane, &new->base);
> +
> + return &new->base;
> +}
> +
> static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
> .update_plane = drm_atomic_helper_update_plane,
> .disable_plane = drm_atomic_helper_disable_plane,
> .reset = drm_atomic_helper_plane_reset,
> - .atomic_duplicate_state =
> drm_atomic_helper_plane_duplicate_state,
> + .atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
> .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
> };
>
> @@ -139,11 +156,13 @@ static void virtio_gpu_resource_flush(struct
> drm_plane *plane,
> struct drm_device *dev = plane->dev;
> struct virtio_gpu_device *vgdev = dev->dev_private;
> struct virtio_gpu_framebuffer *vgfb;
> + struct virtio_gpu_plane_state *vgplane_st;
> struct virtio_gpu_object *bo;
>
> vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
> + vgplane_st = to_virtio_gpu_plane_state(plane->state);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> - if (vgfb->fence) {
> + if (vgplane_st->fence) {
> struct virtio_gpu_object_array *objs;
>
> objs = virtio_gpu_array_alloc(1);
> @@ -152,13 +171,11 @@ static void virtio_gpu_resource_flush(struct
> drm_plane *plane,
> virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
> virtio_gpu_array_lock_resv(objs);
> virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
> x, y,
> - width, height, objs, vgfb->fence);
> + width, height, objs,
> + vgplane_st->fence);
> virtio_gpu_notify(vgdev);
> -
> - dma_fence_wait_timeout(&vgfb->fence->f, true,
> + dma_fence_wait_timeout(&vgplane_st->fence->f, true,
> msecs_to_jiffies(50));
> - dma_fence_put(&vgfb->fence->f);
> - vgfb->fence = NULL;
Not sure if it makes any difference, but would there be a problem if you
unreffed the fence here (existing behavior) instead of deferring it until
cleanup?
> } else {
> virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
> x, y,
> width, height, NULL, NULL);
> @@ -248,12 +265,14 @@ static int virtio_gpu_plane_prepare_fb(struct
> drm_plane *plane,
> struct drm_device *dev = plane->dev;
> struct virtio_gpu_device *vgdev = dev->dev_private;
> struct virtio_gpu_framebuffer *vgfb;
> + struct virtio_gpu_plane_state *vgplane_st;
> struct virtio_gpu_object *bo;
>
> if (!new_state->fb)
> return 0;
>
> vgfb = to_virtio_gpu_framebuffer(new_state->fb);
> + vgplane_st = to_virtio_gpu_plane_state(new_state);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
>
> drm_gem_plane_helper_prepare_fb(plane, new_state);
> @@ -261,10 +280,11 @@ static int virtio_gpu_plane_prepare_fb(struct
> drm_plane *plane,
> if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo-
> >guest_blob))
> return 0;
>
> - if (bo->dumb && (plane->state->fb != new_state->fb)) {
> - vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev-
> >fence_drv.context,
> + if (bo->dumb) {
> + vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
> + vgdev->fence_drv.context,
> 0);
> - if (!vgfb->fence)
> + if (!vgplane_st->fence)
> return -ENOMEM;
> }
>
> @@ -274,15 +294,15 @@ static int virtio_gpu_plane_prepare_fb(struct
> drm_plane *plane,
> static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
> struct drm_plane_state *state)
> {
> - struct virtio_gpu_framebuffer *vgfb;
> + struct virtio_gpu_plane_state *vgplane_st;
>
> if (!state->fb)
> return;
>
> - vgfb = to_virtio_gpu_framebuffer(state->fb);
> - if (vgfb->fence) {
> - dma_fence_put(&vgfb->fence->f);
> - vgfb->fence = NULL;
> + vgplane_st = to_virtio_gpu_plane_state(state);
> + if (vgplane_st->fence) {
> + dma_fence_put(&vgplane_st->fence->f);
> + vgplane_st->fence = NULL;
> }
> }
>
> @@ -295,6 +315,7 @@ static void virtio_gpu_cursor_plane_update(struct
> drm_plane *plane,
> struct virtio_gpu_device *vgdev = dev->dev_private;
> struct virtio_gpu_output *output = NULL;
> struct virtio_gpu_framebuffer *vgfb;
> + struct virtio_gpu_plane_state *vgplane_st;
> struct virtio_gpu_object *bo = NULL;
> uint32_t handle;
>
> @@ -307,6 +328,7 @@ static void virtio_gpu_cursor_plane_update(struct
> drm_plane *plane,
>
> if (plane->state->fb) {
> vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
> + vgplane_st = to_virtio_gpu_plane_state(plane->state);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> handle = bo->hw_res_handle;
> } else {
> @@ -326,11 +348,9 @@ static void virtio_gpu_cursor_plane_update(struct
> drm_plane *plane,
> (vgdev, 0,
> plane->state->crtc_w,
> plane->state->crtc_h,
> - 0, 0, objs, vgfb->fence);
> + 0, 0, objs, vgplane_st->fence);
> virtio_gpu_notify(vgdev);
> - dma_fence_wait(&vgfb->fence->f, true);
> - dma_fence_put(&vgfb->fence->f);
> - vgfb->fence = NULL;
Same comment as above.
Regardless, the patch LGTM.
Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
> + dma_fence_wait(&vgplane_st->fence->f, true);
> }
>
> if (plane->state->fb != old_state->fb) {
> --
> 2.47.0
* Re: [PATCH v3 2/2] drm/virtio: New fence for every plane update
2024-10-22 4:44 ` Kasireddy, Vivek
@ 2024-10-25 17:59 ` Dmitry Osipenko
0 siblings, 0 replies; 8+ messages in thread
From: Dmitry Osipenko @ 2024-10-25 17:59 UTC (permalink / raw)
To: Kasireddy, Vivek, David Airlie, Gerd Hoffmann, Gurchetan Singh,
Chia-I Wu, Rob Clark, Pierre-Eric Pelloux-Prayer, Kim, Dongwon
Cc: dri-devel@lists.freedesktop.org,
virtualization@lists.linux-foundation.org,
linux-kernel@vger.kernel.org, kernel@collabora.com
On 10/22/24 07:44, Kasireddy, Vivek wrote:
>> virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle,
>> x, y,
>> - width, height, objs, vgfb->fence);
>> + width, height, objs,
>> + vgplane_st->fence);
>> virtio_gpu_notify(vgdev);
>> -
>> - dma_fence_wait_timeout(&vgfb->fence->f, true,
>> + dma_fence_wait_timeout(&vgplane_st->fence->f, true,
>> msecs_to_jiffies(50));
>> - dma_fence_put(&vgfb->fence->f);
>> - vgfb->fence = NULL;
> Not sure if it makes any difference but would there be a problem if you unref
> the fence here (existing behavior) instead of deferring it until cleanup?
Previously, the fence was part of the virtio-gpu framebuffer, which was
kind of a hack. Maybe there was no better option back then, when this
code was originally written.
Now the fence is part of the plane's atomic state, where it belongs. We
shouldn't modify the atomic state at commit time.
...
>> @@ -326,11 +348,9 @@ static void virtio_gpu_cursor_plane_update(struct
>> drm_plane *plane,
>> (vgdev, 0,
>> plane->state->crtc_w,
>> plane->state->crtc_h,
>> - 0, 0, objs, vgfb->fence);
>> + 0, 0, objs, vgplane_st->fence);
>> virtio_gpu_notify(vgdev);
>> - dma_fence_wait(&vgfb->fence->f, true);
>> - dma_fence_put(&vgfb->fence->f);
>> - vgfb->fence = NULL;
> Same comment as above.
> Regardless, the patch LGTM.
>
> Acked-by: Vivek Kasireddy <vivek.kasireddy@intel.com>
Thanks for the review :)
--
Best regards,
Dmitry
* Re: [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb()
2024-10-20 23:08 [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Dmitry Osipenko
2024-10-20 23:08 ` [PATCH v3 2/2] drm/virtio: New fence for every plane update Dmitry Osipenko
2024-10-22 4:41 ` [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Kasireddy, Vivek
@ 2024-11-06 1:27 ` Rob Clark
2024-11-17 10:12 ` Dmitry Osipenko
2024-11-19 10:10 ` Dmitry Osipenko
4 siblings, 0 replies; 8+ messages in thread
From: Rob Clark @ 2024-11-06 1:27 UTC (permalink / raw)
To: Dmitry Osipenko
Cc: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Pierre-Eric Pelloux-Prayer, Kim Dongwon, Kasireddy Vivek,
dri-devel, virtualization, linux-kernel, kernel
On Sun, Oct 20, 2024 at 4:08 PM Dmitry Osipenko
<dmitry.osipenko@collabora.com> wrote:
>
> From: Dongwon Kim <dongwon.kim@intel.com>
>
> Use the drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
> synchronization. We need to wait for explicit fences in the case of
> Venus and native contexts when guest userspace uses explicit fencing.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> [dmitry.osipenko@collabora.com: Edit commit message]
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
> index a72a2dbda031..ab7232921cb7 100644
> --- a/drivers/gpu/drm/virtio/virtgpu_plane.c
> +++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
> @@ -26,6 +26,7 @@
> #include <drm/drm_atomic_helper.h>
> #include <drm/drm_damage_helper.h>
> #include <drm/drm_fourcc.h>
> +#include <drm/drm_gem_atomic_helper.h>
>
> #include "virtgpu_drv.h"
>
> @@ -254,6 +255,9 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
>
> vgfb = to_virtio_gpu_framebuffer(new_state->fb);
> bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
> +
> + drm_gem_plane_helper_prepare_fb(plane, new_state);
> +
> if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
> return 0;
>
> --
> 2.47.0
>
* Re: [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb()
2024-10-20 23:08 [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Dmitry Osipenko
` (2 preceding siblings ...)
2024-11-06 1:27 ` Rob Clark
@ 2024-11-17 10:12 ` Dmitry Osipenko
2024-11-19 10:10 ` Dmitry Osipenko
4 siblings, 0 replies; 8+ messages in thread
From: Dmitry Osipenko @ 2024-11-17 10:12 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Rob Clark, Pierre-Eric Pelloux-Prayer, Kim Dongwon,
Kasireddy Vivek
Cc: dri-devel, virtualization, linux-kernel, kernel
On 10/21/24 02:08, Dmitry Osipenko wrote:
> From: Dongwon Kim <dongwon.kim@intel.com>
>
> Use the drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
> synchronization. We need to wait for explicit fences in the case of
> Venus and native contexts when guest userspace uses explicit fencing.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> [dmitry.osipenko@collabora.com: Edit commit message]
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> drivers/gpu/drm/virtio/virtgpu_plane.c | 4 ++++
> 1 file changed, 4 insertions(+)
I'll apply this patchset tomorrow if there are no further comments.
--
Best regards,
Dmitry
* Re: [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb()
2024-10-20 23:08 [PATCH v3 1/2] drm/virtio: Use drm_gem_plane_helper_prepare_fb() Dmitry Osipenko
` (3 preceding siblings ...)
2024-11-17 10:12 ` Dmitry Osipenko
@ 2024-11-19 10:10 ` Dmitry Osipenko
4 siblings, 0 replies; 8+ messages in thread
From: Dmitry Osipenko @ 2024-11-19 10:10 UTC (permalink / raw)
To: David Airlie, Gerd Hoffmann, Gurchetan Singh, Chia-I Wu,
Rob Clark, Pierre-Eric Pelloux-Prayer, Kim Dongwon,
Kasireddy Vivek
Cc: dri-devel, virtualization, linux-kernel, kernel
On 10/21/24 02:08, Dmitry Osipenko wrote:
> From: Dongwon Kim <dongwon.kim@intel.com>
>
> Use the drm_gem_plane_helper_prepare_fb() helper for explicit framebuffer
> synchronization. We need to wait for explicit fences in the case of
> Venus and native contexts when guest userspace uses explicit fencing.
>
> Signed-off-by: Dongwon Kim <dongwon.kim@intel.com>
> [dmitry.osipenko@collabora.com: Edit commit message]
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Applied patchset to misc-next
--
Best regards,
Dmitry