* [PATCH v9 01/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 02/11] virtio-gpu: Support context-init feature with virglrenderer Dmitry Osipenko
` (9 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
New virglrenderer features were stabilized with the release of v1.0.0.
Presence of symbols in virglrenderer.h doesn't guarantee ABI compatibility
with pre-release development versions of libvirglrenderer. Use the
virglrenderer pkg-config version to decide reliably which virgl features
are available.
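As an illustration (not part of the patch), C code built with the
generated config header then guards version-dependent usage on the
define instead of probing struct members; the helper name below is
hypothetical:

    /* HAVE_VIRGL_D3D_INFO_EXT comes from the generated config header and
     * is now derived purely from the virglrenderer package version. */
    static bool have_d3d_tex2d_info(void)
    {
    #ifdef HAVE_VIRGL_D3D_INFO_EXT
        return true;   /* virglrenderer >= 1.0.0 */
    #else
        return false;
    #endif
    }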
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
meson.build | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/meson.build b/meson.build
index 553b9409995b..c6f22f565071 100644
--- a/meson.build
+++ b/meson.build
@@ -2286,11 +2286,8 @@ config_host_data.set('CONFIG_PNG', png.found())
config_host_data.set('CONFIG_VNC', vnc.found())
config_host_data.set('CONFIG_VNC_JPEG', jpeg.found())
config_host_data.set('CONFIG_VNC_SASL', sasl.found())
-if virgl.found()
- config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT',
- cc.has_member('struct virgl_renderer_resource_info_ext', 'd3d_tex2d',
- prefix: '#include <virglrenderer.h>',
- dependencies: virgl))
+if virgl.version().version_compare('>=1.0.0')
+ config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
endif
config_host_data.set('CONFIG_VIRTFS', have_virtfs)
config_host_data.set('CONFIG_VTE', vte.found())
--
2.44.0
* [PATCH v9 02/11] virtio-gpu: Support context-init feature with virglrenderer
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 01/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 03/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Dmitry Osipenko
` (8 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Huang Rui <ray.huang@amd.com>
Patch "virtio-gpu: CONTEXT_INIT feature" has added the context_init
feature flags. Expose this feature and support creating virglrenderer
context with flags using context_id if libvirglrenderer is new enough.
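For reference, a rough sketch of the guest-visible command this handler
consumes (field names match the code below; the exact layout and the
capset-id encoding are defined by the virtio-gpu protocol headers and are
shown here only for illustration):

    struct virtio_gpu_ctx_create {
        struct virtio_gpu_ctrl_hdr hdr;  /* hdr.ctx_id names the context */
        uint32_t nlen;                   /* length of debug_name         */
        uint32_t context_init;           /* flags; low bits = capset id  */
        char debug_name[64];
    };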
Originally-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-gl.c | 4 ++++
hw/display/virtio-gpu-virgl.c | 20 ++++++++++++++++++--
meson.build | 1 +
3 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index e06be60dfbfc..ba478124e2c2 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -127,6 +127,10 @@ static void virtio_gpu_gl_device_realize(DeviceState *qdev, Error **errp)
VIRTIO_GPU_BASE(g)->virtio_config.num_capsets =
virtio_gpu_virgl_get_num_capsets(g);
+#ifdef HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS
+ g->parent_obj.conf.flags |= 1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED;
+#endif
+
virtio_gpu_device_realize(qdev, errp);
}
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 9f34d0e6619c..ef598d8d23ee 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -106,8 +106,24 @@ static void virgl_cmd_context_create(VirtIOGPU *g,
trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
cc.debug_name);
- virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen,
- cc.debug_name);
+ if (cc.context_init) {
+ if (!virtio_gpu_context_init_enabled(g->parent_obj.conf)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: context_init disabled",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ return;
+ }
+
+#ifdef HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS
+ virgl_renderer_context_create_with_flags(cc.hdr.ctx_id,
+ cc.context_init,
+ cc.nlen,
+ cc.debug_name);
+ return;
+#endif
+ }
+
+ virgl_renderer_context_create(cc.hdr.ctx_id, cc.nlen, cc.debug_name);
}
static void virgl_cmd_context_destroy(VirtIOGPU *g,
diff --git a/meson.build b/meson.build
index c6f22f565071..c131db46b2a6 100644
--- a/meson.build
+++ b/meson.build
@@ -2288,6 +2288,7 @@ config_host_data.set('CONFIG_VNC_JPEG', jpeg.found())
config_host_data.set('CONFIG_VNC_SASL', sasl.found())
if virgl.version().version_compare('>=1.0.0')
config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
+ config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
endif
config_host_data.set('CONFIG_VIRTFS', have_virtfs)
config_host_data.set('CONFIG_VTE', vte.found())
--
2.44.0
* [PATCH v9 03/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 01/11] virtio-gpu: Use pkgconfig version to decide which virgl features are available Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 02/11] virtio-gpu: Support context-init feature with virglrenderer Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 04/11] virtio-gpu: Add virgl resource management Dmitry Osipenko
` (7 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
Usage of udmabuf is mandatory only when virgl is disabled and the blobs
feature is enabled in the QEMU machine configuration. If both virgl and
blobs are enabled, udmabuf becomes optional. Since udmabuf isn't widely
supported by popular Linux distros today, relax the udmabuf requirement
for blobs=on,virgl=on. Full-featured virtio-gpu acceleration is then
available to QEMU users without needing udmabuf in the system.
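For example, a machine started with -device virtio-vga-gl,blob=true can
now realize on a host kernel without udmabuf, while blob=true without
virgl (and without rutabaga) still fails with "need rutabaga or udmabuf
for blob resources".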
Reviewed-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index ae831b6b3e3e..dac272ecadb1 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1472,6 +1472,7 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
if (!virtio_gpu_rutabaga_enabled(g->parent_obj.conf) &&
+ !virtio_gpu_virgl_enabled(g->parent_obj.conf) &&
!virtio_gpu_have_udmabuf()) {
error_setg(errp, "need rutabaga or udmabuf for blob resources");
return;
--
2.44.0
* [PATCH v9 04/11] virtio-gpu: Add virgl resource management
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (2 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 03/11] virtio-gpu: Don't require udmabuf when blobs and virgl are enabled Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 05/11] virtio-gpu: Support blob scanout using dmabuf fd Dmitry Osipenko
` (6 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Huang Rui <ray.huang@amd.com>
In preparation for adding host blob support to virtio-gpu, add virgl
resource management that allows a resource to be retrieved by its ID,
along with a virgl resource wrapper on top of the simple resource that
will contain virgl-specific fields.
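The key point is that the per-device reslist keeps storing plain
struct virtio_gpu_simple_resource entries; virgl code embeds that struct
and recovers its wrapper with container_of(), roughly:

    struct virtio_gpu_simple_resource *base =
        virtio_gpu_find_resource(g, resource_id);
    struct virtio_gpu_virgl_resource *res = base ?
        container_of(base, struct virtio_gpu_virgl_resource, base) : NULL;

This avoids a parallel lookup table while letting later patches hang
virgl-only state off the wrapper.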
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-virgl.c | 74 +++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index ef598d8d23ee..7c7ffb0c251e 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -22,6 +22,23 @@
#include <virglrenderer.h>
+struct virtio_gpu_virgl_resource {
+ struct virtio_gpu_simple_resource base;
+};
+
+static struct virtio_gpu_virgl_resource *
+virtio_gpu_virgl_find_resource(VirtIOGPU *g, uint32_t resource_id)
+{
+ struct virtio_gpu_simple_resource *res;
+
+ res = virtio_gpu_find_resource(g, resource_id);
+ if (!res) {
+ return NULL;
+ }
+
+ return container_of(res, struct virtio_gpu_virgl_resource, base);
+}
+
#if VIRGL_RENDERER_CALLBACKS_VERSION >= 4
static void *
virgl_get_egl_display(G_GNUC_UNUSED void *cookie)
@@ -35,11 +52,34 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
{
struct virtio_gpu_resource_create_2d c2d;
struct virgl_renderer_resource_create_args args;
+ struct virtio_gpu_virgl_resource *res;
VIRTIO_GPU_FILL_CMD(c2d);
trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
c2d.width, c2d.height);
+ if (c2d.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_virgl_find_resource(g, c2d.resource_id);
+ if (res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+ __func__, c2d.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = g_new0(struct virtio_gpu_virgl_resource, 1);
+ res->base.width = c2d.width;
+ res->base.height = c2d.height;
+ res->base.format = c2d.format;
+ res->base.resource_id = c2d.resource_id;
+ QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
+
args.handle = c2d.resource_id;
args.target = 2;
args.format = c2d.format;
@@ -59,11 +99,34 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
{
struct virtio_gpu_resource_create_3d c3d;
struct virgl_renderer_resource_create_args args;
+ struct virtio_gpu_virgl_resource *res;
VIRTIO_GPU_FILL_CMD(c3d);
trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
c3d.width, c3d.height, c3d.depth);
+ if (c3d.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_virgl_find_resource(g, c3d.resource_id);
+ if (res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+ __func__, c3d.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = g_new0(struct virtio_gpu_virgl_resource, 1);
+ res->base.width = c3d.width;
+ res->base.height = c3d.height;
+ res->base.format = c3d.format;
+ res->base.resource_id = c3d.resource_id;
+ QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
+
args.handle = c3d.resource_id;
args.target = c3d.target;
args.format = c3d.format;
@@ -82,12 +145,19 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
struct virtio_gpu_resource_unref unref;
+ struct virtio_gpu_virgl_resource *res;
struct iovec *res_iovs = NULL;
int num_iovs = 0;
VIRTIO_GPU_FILL_CMD(unref);
trace_virtio_gpu_cmd_res_unref(unref.resource_id);
+ res = virtio_gpu_virgl_find_resource(g, unref.resource_id);
+ if (!res) {
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
virgl_renderer_resource_detach_iov(unref.resource_id,
&res_iovs,
&num_iovs);
@@ -95,6 +165,10 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
virtio_gpu_cleanup_mapping_iov(g, res_iovs, num_iovs);
}
virgl_renderer_resource_unref(unref.resource_id);
+
+ QTAILQ_REMOVE(&g->reslist, &res->base, next);
+
+ g_free(res);
}
static void virgl_cmd_context_create(VirtIOGPU *g,
--
2.44.0
* [PATCH v9 05/11] virtio-gpu: Support blob scanout using dmabuf fd
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (3 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 04/11] virtio-gpu: Add virgl resource management Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 06/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
` (5 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Robert Beckett <bob.beckett@collabora.com>
Support displaying blob resources by handling the SET_SCANOUT_BLOB
command.
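As a worked example of the framebuffer bounds check added below
(numbers are illustrative): a tightly packed 1920x1080 XRGB8888 blob has
bytes_pp = 4, stride = 1920 * 4 = 7680 and blob_size = 1920 * 1080 * 4 =
8294400. A full-screen rect at (0,0) gives fb.offset = 0 and
fbend = 7680 * (1080 - 1) + 4 * 1920 = 8294400, which does not exceed
blob_size, so the scanout is accepted; any rect or offsets reaching past
blob_size are rejected with VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER.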
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Robert Beckett <bob.beckett@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Antonio Caggiano <quic_acaggian@quicinc.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-virgl.c | 109 +++++++++++++++++++++++++++++++++
hw/display/virtio-gpu.c | 12 ++--
include/hw/virtio/virtio-gpu.h | 7 +++
meson.build | 1 +
4 files changed, 123 insertions(+), 6 deletions(-)
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 7c7ffb0c251e..14e94a82dd6a 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -17,6 +17,8 @@
#include "trace.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-bswap.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
#include "ui/egl-helpers.h"
@@ -78,6 +80,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
res->base.height = c2d.height;
res->base.format = c2d.format;
res->base.resource_id = c2d.resource_id;
+ res->base.dmabuf_fd = -1;
QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
args.handle = c2d.resource_id;
@@ -125,6 +128,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
res->base.height = c3d.height;
res->base.format = c3d.format;
res->base.resource_id = c3d.resource_id;
+ res->base.dmabuf_fd = -1;
QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
args.handle = c3d.resource_id;
@@ -507,6 +511,106 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
g_free(resp);
}
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_framebuffer fb = { 0 };
+ struct virgl_renderer_resource_info info;
+ struct virtio_gpu_virgl_resource *res;
+ struct virtio_gpu_set_scanout_blob ss;
+ uint64_t fbend;
+
+ VIRTIO_GPU_FILL_CMD(ss);
+ virtio_gpu_scanout_blob_bswap(&ss);
+ trace_virtio_gpu_cmd_set_scanout_blob(ss.scanout_id, ss.resource_id,
+ ss.r.width, ss.r.height, ss.r.x,
+ ss.r.y);
+
+ if (ss.scanout_id >= g->parent_obj.conf.max_outputs) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout id specified %d",
+ __func__, ss.scanout_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_SCANOUT_ID;
+ return;
+ }
+
+ if (ss.resource_id == 0) {
+ virtio_gpu_disable_scanout(g, ss.scanout_id);
+ return;
+ }
+
+ if (ss.width < 16 ||
+ ss.height < 16 ||
+ ss.r.x + ss.r.width > ss.width ||
+ ss.r.y + ss.r.height > ss.height) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: illegal scanout %d bounds for"
+ " resource %d, rect (%d,%d)+%d,%d, fb %d %d\n",
+ __func__, ss.scanout_id, ss.resource_id,
+ ss.r.x, ss.r.y, ss.r.width, ss.r.height,
+ ss.width, ss.height);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+ return;
+ }
+
+ res = virtio_gpu_virgl_find_resource(g, ss.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, ss.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+ if (virgl_renderer_resource_get_info(ss.resource_id, &info)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not have info %d\n",
+ __func__, ss.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+ if (res->base.dmabuf_fd < 0) {
+ res->base.dmabuf_fd = info.fd;
+ }
+ if (res->base.dmabuf_fd < 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource not backed by dmabuf %d\n",
+ __func__, ss.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ fb.format = virtio_gpu_get_pixman_format(ss.format);
+ if (!fb.format) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: pixel format not supported %d\n",
+ __func__, ss.format);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+ return;
+ }
+
+ fb.bytes_pp = DIV_ROUND_UP(PIXMAN_FORMAT_BPP(fb.format), 8);
+ fb.width = ss.width;
+ fb.height = ss.height;
+ fb.stride = ss.strides[0];
+ fb.offset = ss.offsets[0] + ss.r.x * fb.bytes_pp + ss.r.y * fb.stride;
+
+ fbend = fb.offset;
+ fbend += fb.stride * (ss.r.height - 1);
+ fbend += fb.bytes_pp * ss.r.width;
+ if (fbend > res->base.blob_size) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: fb end out of range\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+ return;
+ }
+
+ g->parent_obj.enable = 1;
+ if (virtio_gpu_update_dmabuf(g, ss.scanout_id, &res->base, &fb, &ss.r)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to update dmabuf\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+ return;
+ }
+
+ virtio_gpu_update_scanout(g, ss.scanout_id, &res->base, &fb, &ss.r);
+}
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
@@ -573,6 +677,11 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
case VIRTIO_GPU_CMD_GET_EDID:
virtio_gpu_get_edid(g, cmd);
break;
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+ case VIRTIO_GPU_CMD_SET_SCANOUT_BLOB:
+ virgl_cmd_set_scanout_blob(g, cmd);
+ break;
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
default:
cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index dac272ecadb1..1e57a53d346c 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -380,7 +380,7 @@ static void virtio_gpu_resource_create_blob(VirtIOGPU *g,
QTAILQ_INSERT_HEAD(&g->reslist, res, next);
}
-static void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id)
+void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id)
{
struct virtio_gpu_scanout *scanout = &g->parent_obj.scanout[scanout_id];
struct virtio_gpu_simple_resource *res;
@@ -597,11 +597,11 @@ static void virtio_unref_resource(pixman_image_t *image, void *data)
pixman_image_unref(data);
}
-static void virtio_gpu_update_scanout(VirtIOGPU *g,
- uint32_t scanout_id,
- struct virtio_gpu_simple_resource *res,
- struct virtio_gpu_framebuffer *fb,
- struct virtio_gpu_rect *r)
+void virtio_gpu_update_scanout(VirtIOGPU *g,
+ uint32_t scanout_id,
+ struct virtio_gpu_simple_resource *res,
+ struct virtio_gpu_framebuffer *fb,
+ struct virtio_gpu_rect *r)
{
struct virtio_gpu_simple_resource *ores;
struct virtio_gpu_scanout *scanout;
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index ed44cdad6b34..44c676c3ca4a 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -329,6 +329,13 @@ int virtio_gpu_update_dmabuf(VirtIOGPU *g,
struct virtio_gpu_framebuffer *fb,
struct virtio_gpu_rect *r);
+void virtio_gpu_update_scanout(VirtIOGPU *g,
+ uint32_t scanout_id,
+ struct virtio_gpu_simple_resource *res,
+ struct virtio_gpu_framebuffer *fb,
+ struct virtio_gpu_rect *r);
+void virtio_gpu_disable_scanout(VirtIOGPU *g, int scanout_id);
+
/* virtio-gpu-3d.c */
void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd);
diff --git a/meson.build b/meson.build
index c131db46b2a6..5ef50811b6ba 100644
--- a/meson.build
+++ b/meson.build
@@ -2289,6 +2289,7 @@ config_host_data.set('CONFIG_VNC_SASL', sasl.found())
if virgl.version().version_compare('>=1.0.0')
config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
+ config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB', 1)
endif
config_host_data.set('CONFIG_VIRTFS', have_virtfs)
config_host_data.set('CONFIG_VTE', vte.found())
--
2.44.0
* [PATCH v9 06/11] virtio-gpu: Support suspension of commands processing
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (4 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 05/11] virtio-gpu: Support blob scanout using dmabuf fd Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 07/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
` (4 subsequent siblings)
10 siblings, 0 replies; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
Add new "suspended" flag to virtio_gpu_ctrl_command telling cmd
processor that it should stop processing commands and retry again
next time until flag is unset.
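The resulting flow, as a rough sketch (the waiting condition is
hypothetical):

    /* in a command handler that cannot complete yet */
    if (still_waiting_for_async_work) {
        cmd->suspended = true;   /* command stays queued */
        return;
    }

    /* in virtio_gpu_process_cmdq() */
    vgc->process_cmd(g, cmd);
    if (cmd->suspended) {
        break;                   /* retry from this command next time */
    }

Whatever finishes the asynchronous work is then expected to call
virtio_gpu_process_cmdq() again to resume processing.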
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-gl.c | 1 +
hw/display/virtio-gpu-rutabaga.c | 1 +
hw/display/virtio-gpu-virgl.c | 3 +++
hw/display/virtio-gpu.c | 5 +++++
include/hw/virtio/virtio-gpu.h | 1 +
5 files changed, 11 insertions(+)
diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index ba478124e2c2..a8892bcc5346 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -79,6 +79,7 @@ static void virtio_gpu_gl_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
cmd->vq = vq;
cmd->error = 0;
cmd->finished = false;
+ cmd->suspended = false;
QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
}
diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
index 17bf701a2163..b6e84d436fb2 100644
--- a/hw/display/virtio-gpu-rutabaga.c
+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -1061,6 +1061,7 @@ static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
cmd->vq = vq;
cmd->error = 0;
cmd->finished = false;
+ cmd->suspended = false;
QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
}
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 14e94a82dd6a..0feaa9f2c52e 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -687,6 +687,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
break;
}
+ if (cmd->suspended) {
+ return;
+ }
if (cmd->finished) {
return;
}
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 1e57a53d346c..a1bd4d6914c4 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1054,6 +1054,10 @@ void virtio_gpu_process_cmdq(VirtIOGPU *g)
/* process command */
vgc->process_cmd(g, cmd);
+ if (cmd->suspended) {
+ break;
+ }
+
QTAILQ_REMOVE(&g->cmdq, cmd, next);
if (virtio_gpu_stats_enabled(g->parent_obj.conf)) {
g->stats.requests++;
@@ -1113,6 +1117,7 @@ static void virtio_gpu_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
cmd->vq = vq;
cmd->error = 0;
cmd->finished = false;
+ cmd->suspended = false;
QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
}
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 44c676c3ca4a..dc24360656ce 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -132,6 +132,7 @@ struct virtio_gpu_ctrl_command {
struct virtio_gpu_ctrl_hdr cmd_hdr;
uint32_t error;
bool finished;
+ bool suspended;
QTAILQ_ENTRY(virtio_gpu_ctrl_command) next;
};
--
2.44.0
* [PATCH v9 07/11] virtio-gpu: Handle resource blob commands
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (5 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 06/11] virtio-gpu: Support suspension of commands processing Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-27 6:57 ` Akihiko Odaki
2024-04-25 15:45 ` [PATCH v9 08/11] virtio-gpu: Resource UUID Dmitry Osipenko
` (3 subsequent siblings)
10 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Antonio Caggiano <antonio.caggiano@collabora.com>
Support BLOB resource creation, mapping and unmapping by calling the
new stable virglrenderer 0.10 interface. This is only enabled when the
interface is available and the blob config option is set, e.g.
-device virtio-vga-gl,blob=true
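The virglrenderer entry points involved, in the order a guest would
typically exercise them (minimal sketch; error handling and the iovec
setup are omitted, values are placeholders):

    struct virgl_renderer_resource_create_blob_args args = {
        .res_handle = resource_id,
        .ctx_id     = ctx_id,
        .blob_mem   = VIRTIO_GPU_BLOB_MEM_HOST3D,
        .blob_id    = blob_id,
        .size       = size,
    };
    virgl_renderer_resource_create_blob(&args);        /* CREATE_BLOB    */

    void *data;
    uint64_t mapped_size;
    virgl_renderer_resource_map(resource_id, &data, &mapped_size); /* MAP */
    /* 'data' is exposed to the guest via the hostmem memory region */

    virgl_renderer_resource_unmap(resource_id);        /* UNMAP_BLOB     */
    virgl_renderer_resource_unref(resource_id);        /* RESOURCE_UNREF */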
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-virgl.c | 268 ++++++++++++++++++++++++++++++++++
hw/display/virtio-gpu.c | 4 +-
2 files changed, 271 insertions(+), 1 deletion(-)
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 0feaa9f2c52e..73d4acbf1777 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -26,6 +26,8 @@
struct virtio_gpu_virgl_resource {
struct virtio_gpu_simple_resource base;
+ bool async_unmap_in_progress;
+ MemoryRegion *mr;
};
static struct virtio_gpu_virgl_resource *
@@ -49,6 +51,120 @@ virgl_get_egl_display(G_GNUC_UNUSED void *cookie)
}
#endif
+#ifdef HAVE_VIRGL_RESOURCE_BLOB
+struct virtio_gpu_virgl_hostmem_region {
+ MemoryRegion mr;
+ struct VirtIOGPU *g;
+ struct virtio_gpu_virgl_resource *res;
+};
+
+static void virtio_gpu_virgl_resume_cmdq(void *opaque)
+{
+ VirtIOGPU *g = opaque;
+
+ virtio_gpu_process_cmdq(g);
+}
+
+static void virtio_gpu_virgl_hostmem_region_free(void *obj)
+{
+ MemoryRegion *mr = MEMORY_REGION(obj);
+ struct virtio_gpu_virgl_hostmem_region *vmr;
+ VirtIOGPUBase *b;
+
+ vmr = container_of(mr, struct virtio_gpu_virgl_hostmem_region, mr);
+ vmr->res->async_unmap_in_progress = false;
+
+ b = VIRTIO_GPU_BASE(vmr->g);
+ b->renderer_blocked--;
+
+ /*
+ * memory_region_unref() may be executed from RCU thread context, while
+ * virglrenderer works only on the main-loop thread that's holding GL
+ * context.
+ */
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
+ virtio_gpu_virgl_resume_cmdq, vmr->g);
+ g_free(vmr);
+}
+
+static int
+virtio_gpu_virgl_map_resource_blob(VirtIOGPU *g,
+ struct virtio_gpu_virgl_resource *res,
+ uint64_t offset)
+{
+ struct virtio_gpu_virgl_hostmem_region *vmr;
+ VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+ MemoryRegion *mr;
+ uint64_t size;
+ void *data;
+ int ret;
+
+ if (!virtio_gpu_hostmem_enabled(b->conf)) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: hostmem disabled\n", __func__);
+ return -EOPNOTSUPP;
+ }
+
+ ret = virgl_renderer_resource_map(res->base.resource_id, &data, &size);
+ if (ret) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map virgl resource\n",
+ __func__);
+ return -ret;
+ }
+
+ vmr = g_new0(struct virtio_gpu_virgl_hostmem_region, 1);
+ vmr->res = res;
+ vmr->g = g;
+
+ mr = &vmr->mr;
+ memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
+ memory_region_add_subregion(&b->hostmem, offset, mr);
+ memory_region_set_enabled(mr, true);
+
+ /*
+ * Potentially, MR could outlive the resource if MR's reference is held
+ * outside of virtio-gpu. In order to prevent unmapping resource while
+ * MR is alive, and thus, making the data pointer invalid, we will block
+ * virtio-gpu command processing until MR is fully unreferenced and
+ * released.
+ */
+ OBJECT(mr)->free = virtio_gpu_virgl_hostmem_region_free;
+
+ res->mr = mr;
+
+ return 0;
+}
+
+static bool
+virtio_gpu_virgl_unmap_resource_blob(VirtIOGPU *g,
+ struct virtio_gpu_virgl_resource *res)
+{
+ VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
+
+ if (!res->async_unmap_in_progress && res->mr) {
+ /* memory region owns self res->mr object and frees it by itself */
+ MemoryRegion *mr = res->mr;
+ res->mr = NULL;
+
+ res->async_unmap_in_progress = true;
+
+ /* render will be unblocked when MR is freed */
+ b->renderer_blocked++;
+
+ memory_region_set_enabled(mr, false);
+ memory_region_del_subregion(&b->hostmem, mr);
+ object_unparent(OBJECT(mr));
+ }
+
+ if (res->async_unmap_in_progress) {
+ return false;
+ }
+
+ virgl_renderer_resource_unmap(res->base.resource_id);
+
+ return true;
+}
+#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+
static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
@@ -162,6 +278,14 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
return;
}
+ if (res->mr || cmd->suspended) {
+ bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
+ cmd->suspended = !unmapped;
+ if (cmd->suspended) {
+ return;
+ }
+ }
+
virgl_renderer_resource_detach_iov(unref.resource_id,
&res_iovs,
&num_iovs);
@@ -512,6 +636,141 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
}
#ifdef HAVE_VIRGL_RESOURCE_BLOB
+static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
+ struct virtio_gpu_resource_create_blob cblob;
+ struct virtio_gpu_virgl_resource *res;
+ int ret;
+
+ if (!virtio_gpu_blob_enabled(g->parent_obj.conf)) {
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
+ return;
+ }
+
+ VIRTIO_GPU_FILL_CMD(cblob);
+ virtio_gpu_create_blob_bswap(&cblob);
+ trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+ if (cblob.resource_id == 0) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
+ __func__);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = virtio_gpu_virgl_find_resource(g, cblob.resource_id);
+ if (res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
+ __func__, cblob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ res = g_new0(struct virtio_gpu_virgl_resource, 1);
+ res->base.resource_id = cblob.resource_id;
+ res->base.blob_size = cblob.size;
+ res->base.dmabuf_fd = -1;
+
+ if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+ ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
+ cmd, &res->base.addrs,
+ &res->base.iov, &res->base.iov_cnt);
+ if (!ret) {
+ g_free(res);
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ return;
+ }
+ }
+
+ QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
+
+ virgl_args.res_handle = cblob.resource_id;
+ virgl_args.ctx_id = cblob.hdr.ctx_id;
+ virgl_args.blob_mem = cblob.blob_mem;
+ virgl_args.blob_id = cblob.blob_id;
+ virgl_args.blob_flags = cblob.blob_flags;
+ virgl_args.size = cblob.size;
+ virgl_args.iovecs = res->base.iov;
+ virgl_args.num_iovs = res->base.iov_cnt;
+
+ ret = virgl_renderer_resource_create_blob(&virgl_args);
+ if (ret) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
+ __func__, strerror(-ret));
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ }
+}
+
+static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_resource_map_blob mblob;
+ struct virtio_gpu_virgl_resource *res;
+ struct virtio_gpu_resp_map_info resp;
+ int ret;
+
+ VIRTIO_GPU_FILL_CMD(mblob);
+ virtio_gpu_map_blob_bswap(&mblob);
+
+ res = virtio_gpu_virgl_find_resource(g, mblob.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, mblob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ if (res->mr) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
+ __func__, mblob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ ret = virtio_gpu_virgl_map_resource_blob(g, res, mblob.offset);
+ if (ret) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
+ __func__, strerror(ret));
+ cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
+ return;
+ }
+
+ memset(&resp, 0, sizeof(resp));
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+ virgl_renderer_resource_get_map_info(mblob.resource_id, &resp.map_info);
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void virgl_cmd_resource_unmap_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_resource_unmap_blob ublob;
+ struct virtio_gpu_virgl_resource *res;
+
+ VIRTIO_GPU_FILL_CMD(ublob);
+ virtio_gpu_unmap_blob_bswap(&ublob);
+
+ res = virtio_gpu_virgl_find_resource(g, ublob.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, ublob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ if (!res->mr && !cmd->suspended) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already unmapped %d\n",
+ __func__, ublob.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
+ cmd->suspended = !unmapped;
+}
+
static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
@@ -678,6 +937,15 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
virtio_gpu_get_edid(g, cmd);
break;
#ifdef HAVE_VIRGL_RESOURCE_BLOB
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+ virgl_cmd_resource_create_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+ virgl_cmd_resource_map_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+ virgl_cmd_resource_unmap_blob(g, cmd);
+ break;
case VIRTIO_GPU_CMD_SET_SCANOUT_BLOB:
virgl_cmd_set_scanout_blob(g, cmd);
break;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index a1bd4d6914c4..45c1f2006712 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1483,10 +1483,12 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
return;
}
+#ifndef HAVE_VIRGL_RESOURCE_BLOB
if (virtio_gpu_virgl_enabled(g->parent_obj.conf)) {
- error_setg(errp, "blobs and virgl are not compatible (yet)");
+ error_setg(errp, "old virglrenderer, blob resources unsupported");
return;
}
+#endif
}
if (!virtio_gpu_base_device_realize(qdev,
--
2.44.0
* Re: [PATCH v9 07/11] virtio-gpu: Handle resource blob commands
2024-04-25 15:45 ` [PATCH v9 07/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
@ 2024-04-27 6:57 ` Akihiko Odaki
0 siblings, 0 replies; 21+ messages in thread
From: Akihiko Odaki @ 2024-04-27 6:57 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/04/26 0:45, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>
> Support BLOB resources creation, mapping and unmapping by calling the
> new stable virglrenderer 0.10 interface. Only enabled when available and
> via the blob config. E.g. -device virtio-vga-gl,blob=true
>
> Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
> Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> hw/display/virtio-gpu-virgl.c | 268 ++++++++++++++++++++++++++++++++++
> hw/display/virtio-gpu.c | 4 +-
> 2 files changed, 271 insertions(+), 1 deletion(-)
>
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index 0feaa9f2c52e..73d4acbf1777 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -26,6 +26,8 @@
>
> struct virtio_gpu_virgl_resource {
> struct virtio_gpu_simple_resource base;
> + bool async_unmap_in_progress;
Why is this flag needed?
> + MemoryRegion *mr;
> };
>
> static struct virtio_gpu_virgl_resource *
> @@ -49,6 +51,120 @@ virgl_get_egl_display(G_GNUC_UNUSED void *cookie)
> }
> #endif
>
> +#ifdef HAVE_VIRGL_RESOURCE_BLOB
> +struct virtio_gpu_virgl_hostmem_region {
> + MemoryRegion mr;
> + struct VirtIOGPU *g;
> + struct virtio_gpu_virgl_resource *res;
> +};
> +
> +static void virtio_gpu_virgl_resume_cmdq(void *opaque)
> +{
> + VirtIOGPU *g = opaque;
> +
> + virtio_gpu_process_cmdq(g);
> +}
> +
> +static void virtio_gpu_virgl_hostmem_region_free(void *obj)
> +{
> + MemoryRegion *mr = MEMORY_REGION(obj);
> + struct virtio_gpu_virgl_hostmem_region *vmr;
> + VirtIOGPUBase *b;
> +
> + vmr = container_of(mr, struct virtio_gpu_virgl_hostmem_region, mr);
> + vmr->res->async_unmap_in_progress = false;
> +
> + b = VIRTIO_GPU_BASE(vmr->g);
> + b->renderer_blocked--;
> +
> + /*
> + * memory_region_unref() may be executed from RCU thread context, while
> + * virglrenderer works only on the main-loop thread that's holding GL
> + * context.
> + */
> + aio_bh_schedule_oneshot(qemu_get_aio_context(),
> + virtio_gpu_virgl_resume_cmdq, vmr->g);
Use aio_bh_new() and qemu_bh_schedule() instead to save one-time bottom
half allocation.
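Something along these lines, as an untested sketch (the device-level
resume BH field is new/hypothetical here):

    /* allocated once, e.g. at virgl init time */
    g->cmdq_resume_bh = aio_bh_new(qemu_get_aio_context(),
                                   virtio_gpu_virgl_resume_cmdq, g);

    /* in virtio_gpu_virgl_hostmem_region_free() */
    qemu_bh_schedule(g->cmdq_resume_bh);
    g_free(vmr);

(and qemu_bh_delete() it on device teardown).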
> + g_free(vmr);
> +}
> +
> +static int
> +virtio_gpu_virgl_map_resource_blob(VirtIOGPU *g,
> + struct virtio_gpu_virgl_resource *res,
> + uint64_t offset)
> +{
> + struct virtio_gpu_virgl_hostmem_region *vmr;
> + VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> + MemoryRegion *mr;
> + uint64_t size;
> + void *data;
> + int ret;
> +
> + if (!virtio_gpu_hostmem_enabled(b->conf)) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: hostmem disabled\n", __func__);
> + return -EOPNOTSUPP;
> + }
> +
> + ret = virgl_renderer_resource_map(res->base.resource_id, &data, &size);
> + if (ret) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: failed to map virgl resource\n",
> + __func__);
Print strerror(-ret) here instead as printing strerror(EOPNOTSUPP) helps
little when !virtio_gpu_hostmem_enabled(b->conf).
> + return -ret;
> + }
> +
> + vmr = g_new0(struct virtio_gpu_virgl_hostmem_region, 1);
> + vmr->res = res;
> + vmr->g = g;
> +
> + mr = &vmr->mr;
> + memory_region_init_ram_ptr(mr, OBJECT(mr), "blob", size, data);
> + memory_region_add_subregion(&b->hostmem, offset, mr);
> + memory_region_set_enabled(mr, true);
> +
> + /*
> + * Potentially, MR could outlive the resource if MR's reference is held
> + * outside of virtio-gpu. In order to prevent unmapping resource while
> + * MR is alive, and thus, making the data pointer invalid, we will block
> + * virtio-gpu command processing until MR is fully unreferenced and
> + * released.
> + */
> + OBJECT(mr)->free = virtio_gpu_virgl_hostmem_region_free;
> +
> + res->mr = mr;
> +
> + return 0;
> +}
> +
> +static bool
> +virtio_gpu_virgl_unmap_resource_blob(VirtIOGPU *g,
> + struct virtio_gpu_virgl_resource *res)
> +{
> + VirtIOGPUBase *b = VIRTIO_GPU_BASE(g);
> +
> + if (!res->async_unmap_in_progress && res->mr) {
> + /* memory region owns self res->mr object and frees it by itself */
> + MemoryRegion *mr = res->mr;
> + res->mr = NULL;
> +
> + res->async_unmap_in_progress = true;
> +
> + /* render will be unblocked when MR is freed */
> + b->renderer_blocked++;
> +
> + memory_region_set_enabled(mr, false);
> + memory_region_del_subregion(&b->hostmem, mr);
> + object_unparent(OBJECT(mr));
> + }
> +
> + if (res->async_unmap_in_progress) {
> + return false;
> + }
> +
> + virgl_renderer_resource_unmap(res->base.resource_id);
> +
> + return true;
> +}
> +#endif /* HAVE_VIRGL_RESOURCE_BLOB */
> +
> static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
> struct virtio_gpu_ctrl_command *cmd)
> {
> @@ -162,6 +278,14 @@ static void virgl_cmd_resource_unref(VirtIOGPU *g,
> return;
> }
>
> + if (res->mr || cmd->suspended) {
> + bool unmapped = virtio_gpu_virgl_unmap_resource_blob(g, res);
> + cmd->suspended = !unmapped;
> + if (cmd->suspended) {
> + return;
> + }
> + }
> +
> virgl_renderer_resource_detach_iov(unref.resource_id,
> &res_iovs,
> &num_iovs);
> @@ -512,6 +636,141 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
> }
>
> #ifdef HAVE_VIRGL_RESOURCE_BLOB
> +static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct virgl_renderer_resource_create_blob_args virgl_args = { 0 };
> + struct virtio_gpu_resource_create_blob cblob;
> + struct virtio_gpu_virgl_resource *res;
> + int ret;
> +
> + if (!virtio_gpu_blob_enabled(g->parent_obj.conf)) {
> + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
> + return;
> + }
> +
> + VIRTIO_GPU_FILL_CMD(cblob);
> + virtio_gpu_create_blob_bswap(&cblob);
> + trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> +
> + if (cblob.resource_id == 0) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: resource id 0 is not allowed\n",
> + __func__);
> + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> + return;
> + }
> +
> + res = virtio_gpu_virgl_find_resource(g, cblob.resource_id);
> + if (res) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already exists %d\n",
> + __func__, cblob.resource_id);
> + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> + return;
> + }
> +
> + res = g_new0(struct virtio_gpu_virgl_resource, 1);
> + res->base.resource_id = cblob.resource_id;
> + res->base.blob_size = cblob.size;
> + res->base.dmabuf_fd = -1;
> +
> + if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> + ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
> + cmd, &res->base.addrs,
> + &res->base.iov, &res->base.iov_cnt);
> + if (!ret) {
> + g_free(res);
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> + return;
> + }
> + }
> +
> + QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
> +
> + virgl_args.res_handle = cblob.resource_id;
> + virgl_args.ctx_id = cblob.hdr.ctx_id;
> + virgl_args.blob_mem = cblob.blob_mem;
> + virgl_args.blob_id = cblob.blob_id;
> + virgl_args.blob_flags = cblob.blob_flags;
> + virgl_args.size = cblob.size;
> + virgl_args.iovecs = res->base.iov;
> + virgl_args.num_iovs = res->base.iov_cnt;
> +
> + ret = virgl_renderer_resource_create_blob(&virgl_args);
> + if (ret) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: virgl blob create error: %s\n",
> + __func__, strerror(-ret));
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> + }
> +}
> +
> +static void virgl_cmd_resource_map_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct virtio_gpu_resource_map_blob mblob;
> + struct virtio_gpu_virgl_resource *res;
> + struct virtio_gpu_resp_map_info resp;
> + int ret;
> +
> + VIRTIO_GPU_FILL_CMD(mblob);
> + virtio_gpu_map_blob_bswap(&mblob);
> +
> + res = virtio_gpu_virgl_find_resource(g, mblob.resource_id);
> + if (!res) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
> + __func__, mblob.resource_id);
> + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> + return;
> + }
> +
> + if (res->mr) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: resource already mapped %d\n",
> + __func__, mblob.resource_id);
> + cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
> + return;
> + }
> +
> + ret = virtio_gpu_virgl_map_resource_blob(g, res, mblob.offset);
> + if (ret) {
> + qemu_log_mask(LOG_GUEST_ERROR, "%s: resource map error: %s\n",
> + __func__, strerror(ret));
> + cmd->error = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
I think it's better to use VIRTIO_GPU_RESP_ERR_UNSPEC here; we don't
know if the error is out-of-memory or something else.
* [PATCH v9 08/11] virtio-gpu: Resource UUID
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (6 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 07/11] virtio-gpu: Handle resource blob commands Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-27 7:01 ` Akihiko Odaki
2024-04-25 15:45 ` [PATCH v9 09/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
` (2 subsequent siblings)
10 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Antonio Caggiano <antonio.caggiano@collabora.com>
Enable the resource UUID feature and implement the resource assign-UUID
command. UUID feature availability is mandatory for the Vulkan Venus
context.
The UUID is intended for sharing dmabufs between virtio devices on the
host. QEMU doesn't have a second virtio device to share with, thus a
simple stub UUID implementation is enough. A more complete implementation
using a global UUID resource table might become interesting for
multi-GPU cases.
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/trace-events | 1 +
hw/display/virtio-gpu-base.c | 3 +++
hw/display/virtio-gpu-virgl.c | 33 +++++++++++++++++++++++++++++++++
3 files changed, 37 insertions(+)
diff --git a/hw/display/trace-events b/hw/display/trace-events
index 2336a0ca1570..54d6894c59f4 100644
--- a/hw/display/trace-events
+++ b/hw/display/trace-events
@@ -41,6 +41,7 @@ virtio_gpu_cmd_res_create_blob(uint32_t res, uint64_t size) "res 0x%x, size %" P
virtio_gpu_cmd_res_unref(uint32_t res) "res 0x%x"
virtio_gpu_cmd_res_back_attach(uint32_t res) "res 0x%x"
virtio_gpu_cmd_res_back_detach(uint32_t res) "res 0x%x"
+virtio_gpu_cmd_res_assign_uuid(uint32_t res) "res 0x%x"
virtio_gpu_cmd_res_xfer_toh_2d(uint32_t res) "res 0x%x"
virtio_gpu_cmd_res_xfer_toh_3d(uint32_t res) "res 0x%x"
virtio_gpu_cmd_res_xfer_fromh_3d(uint32_t res) "res 0x%x"
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 4fc7ef8896c1..13014b9a73eb 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -226,6 +226,9 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
virtio_gpu_rutabaga_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_VIRGL);
}
+ if (virtio_gpu_virgl_enabled(g->conf)) {
+ features |= (1 << VIRTIO_GPU_F_RESOURCE_UUID);
+ }
if (virtio_gpu_edid_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_EDID);
}
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 73d4acbf1777..de788df155bf 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -14,6 +14,7 @@
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "qemu/iov.h"
+#include "qemu/uuid.h"
#include "trace.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-gpu.h"
@@ -28,6 +29,7 @@ struct virtio_gpu_virgl_resource {
struct virtio_gpu_simple_resource base;
bool async_unmap_in_progress;
MemoryRegion *mr;
+ QemuUUID uuid;
};
static struct virtio_gpu_virgl_resource *
@@ -197,6 +199,7 @@ static void virgl_cmd_create_resource_2d(VirtIOGPU *g,
res->base.format = c2d.format;
res->base.resource_id = c2d.resource_id;
res->base.dmabuf_fd = -1;
+ qemu_uuid_generate(&res->uuid);
QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
args.handle = c2d.resource_id;
@@ -245,6 +248,7 @@ static void virgl_cmd_create_resource_3d(VirtIOGPU *g,
res->base.format = c3d.format;
res->base.resource_id = c3d.resource_id;
res->base.dmabuf_fd = -1;
+ qemu_uuid_generate(&res->uuid);
QTAILQ_INSERT_HEAD(&g->reslist, &res->base, next);
args.handle = c3d.resource_id;
@@ -672,6 +676,7 @@ static void virgl_cmd_resource_create_blob(VirtIOGPU *g,
res->base.resource_id = cblob.resource_id;
res->base.blob_size = cblob.size;
res->base.dmabuf_fd = -1;
+ qemu_uuid_generate(&res->uuid);
if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
ret = virtio_gpu_create_mapping_iov(g, cblob.nr_entries, sizeof(cblob),
@@ -870,6 +875,31 @@ static void virgl_cmd_set_scanout_blob(VirtIOGPU *g,
}
#endif /* HAVE_VIRGL_RESOURCE_BLOB */
+static void virgl_cmd_assign_uuid(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_resource_assign_uuid assign;
+ struct virtio_gpu_resp_resource_uuid resp;
+ struct virtio_gpu_virgl_resource *res;
+
+ VIRTIO_GPU_FILL_CMD(assign);
+ virtio_gpu_bswap_32(&assign, sizeof(assign));
+ trace_virtio_gpu_cmd_res_assign_uuid(assign.resource_id);
+
+ res = virtio_gpu_virgl_find_resource(g, assign.resource_id);
+ if (!res) {
+ qemu_log_mask(LOG_GUEST_ERROR, "%s: resource does not exist %d\n",
+ __func__, assign.resource_id);
+ cmd->error = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
+ return;
+ }
+
+ memset(&resp, 0, sizeof(resp));
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_RESOURCE_UUID;
+ memcpy(resp.uuid, res->uuid.data, sizeof(resp.uuid));
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd)
{
@@ -924,6 +954,9 @@ void virtio_gpu_virgl_process_cmd(VirtIOGPU *g,
/* TODO add security */
virgl_cmd_ctx_detach_resource(g, cmd);
break;
+ case VIRTIO_GPU_CMD_RESOURCE_ASSIGN_UUID:
+ virgl_cmd_assign_uuid(g, cmd);
+ break;
case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
virgl_cmd_get_capset_info(g, cmd);
break;
--
2.44.0
* Re: [PATCH v9 08/11] virtio-gpu: Resource UUID
2024-04-25 15:45 ` [PATCH v9 08/11] virtio-gpu: Resource UUID Dmitry Osipenko
@ 2024-04-27 7:01 ` Akihiko Odaki
0 siblings, 0 replies; 21+ messages in thread
From: Akihiko Odaki @ 2024-04-27 7:01 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/04/26 0:45, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>
> Enable resource UUID feature and implement command resource assign UUID.
> UUID feature availability is mandatory for Vulkan Venus context.
>
> UUID is intended for sharing dmabufs between virtio devices on host. Qemu
> doesn't have second virtio device for sharing, thus a simple stub UUID
> implementation is enough. More complete implementation using global UUID
> resource table might become interesting for a multi-gpu cases.
This message needs to be updated to clarify that a VM can have a second
virtio-gpu device but this implementation does not support sharing
between two virtio-gpu devices.
* [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (7 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 08/11] virtio-gpu: Resource UUID Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-27 7:12 ` Akihiko Odaki
2024-04-25 15:45 ` [PATCH v9 10/11] virtio-gpu: Support Venus context Dmitry Osipenko
2024-04-25 15:45 ` [PATCH v9 11/11] migration/virtio: Add virtio-gpu section Dmitry Osipenko
10 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
virtio_gpu_virgl_get_num_capsets will return "num_capsets", but we can't
assume that capset_index 1 is always VIRGL2 once more capsets, such as
the Venus and DRM capsets, are supported. Register capsets dynamically to
avoid that problem.
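With that in place, follow-up patches only need to append ids; e.g. a
Venus capset would be registered along these lines (sketch, the exact
availability check is illustrative):

    uint32_t venus_max_ver, venus_max_size;

    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
                               &venus_max_ver, &venus_max_size);
    if (venus_max_size) {
        virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VENUS);
    }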
Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-virgl.c | 34 +++++++++++++++++++++++-----------
include/hw/virtio/virtio-gpu.h | 2 ++
2 files changed, 25 insertions(+), 11 deletions(-)
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index de788df155bf..9aa1fd78f1e1 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -597,19 +597,13 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
VIRTIO_GPU_FILL_CMD(info);
memset(&resp, 0, sizeof(resp));
- if (info.capset_index == 0) {
- resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL;
- virgl_renderer_get_cap_set(resp.capset_id,
- &resp.capset_max_version,
- &resp.capset_max_size);
- } else if (info.capset_index == 1) {
- resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL2;
+
+ if (info.capset_index < g->capset_ids->len) {
+ resp.capset_id = g_array_index(g->capset_ids, uint32_t,
+ info.capset_index);
virgl_renderer_get_cap_set(resp.capset_id,
&resp.capset_max_version,
&resp.capset_max_size);
- } else {
- resp.capset_max_version = 0;
- resp.capset_max_size = 0;
}
resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
@@ -1159,12 +1153,30 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
return 0;
}
+static void virtio_gpu_virgl_add_capset(VirtIOGPU *g, uint32_t capset_id)
+{
+ g_array_append_val(g->capset_ids, capset_id);
+}
+
int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
{
uint32_t capset2_max_ver, capset2_max_size;
+
+ if (g->capset_ids) {
+ return g->capset_ids->len;
+ }
+
+ g->capset_ids = g_array_new(false, false, sizeof(uint32_t));
+
+ /* VIRGL is always supported. */
+ virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL);
+
virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
&capset2_max_ver,
&capset2_max_size);
+ if (capset2_max_ver) {
+ virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
+ }
- return capset2_max_ver ? 2 : 1;
+ return g->capset_ids->len;
}
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index dc24360656ce..32f38d86c908 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -211,6 +211,8 @@ struct VirtIOGPU {
QTAILQ_HEAD(, VGPUDMABuf) bufs;
VGPUDMABuf *primary[VIRTIO_GPU_MAX_SCANOUTS];
} dmabuf;
+
+ GArray *capset_ids;
};
struct VirtIOGPUClass {
--
2.44.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-04-25 15:45 ` [PATCH v9 09/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
@ 2024-04-27 7:12 ` Akihiko Odaki
2024-05-01 19:31 ` Dmitry Osipenko
0 siblings, 1 reply; 21+ messages in thread
From: Akihiko Odaki @ 2024-04-27 7:12 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/04/26 0:45, Dmitry Osipenko wrote:
> From: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
>
> virtio_gpu_virgl_get_num_capsets() will return "num_capsets", but we can't
> assume that capset_index 1 is always VIRGL2 once more capsets, such as Venus
> and DRM, are supported. Register capsets dynamically to avoid that problem.
>
> Signed-off-by: Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> hw/display/virtio-gpu-virgl.c | 34 +++++++++++++++++++++++-----------
> include/hw/virtio/virtio-gpu.h | 2 ++
> 2 files changed, 25 insertions(+), 11 deletions(-)
>
> diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
> index de788df155bf..9aa1fd78f1e1 100644
> --- a/hw/display/virtio-gpu-virgl.c
> +++ b/hw/display/virtio-gpu-virgl.c
> @@ -597,19 +597,13 @@ static void virgl_cmd_get_capset_info(VirtIOGPU *g,
> VIRTIO_GPU_FILL_CMD(info);
>
> memset(&resp, 0, sizeof(resp));
> - if (info.capset_index == 0) {
> - resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL;
> - virgl_renderer_get_cap_set(resp.capset_id,
> - &resp.capset_max_version,
> - &resp.capset_max_size);
> - } else if (info.capset_index == 1) {
> - resp.capset_id = VIRTIO_GPU_CAPSET_VIRGL2;
> +
> + if (info.capset_index < g->capset_ids->len) {
> + resp.capset_id = g_array_index(g->capset_ids, uint32_t,
> + info.capset_index);
> virgl_renderer_get_cap_set(resp.capset_id,
> &resp.capset_max_version,
> &resp.capset_max_size);
> - } else {
> - resp.capset_max_version = 0;
> - resp.capset_max_size = 0;
> }
> resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> @@ -1159,12 +1153,30 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
> return 0;
> }
>
> +static void virtio_gpu_virgl_add_capset(VirtIOGPU *g, uint32_t capset_id)
> +{
> + g_array_append_val(g->capset_ids, capset_id);
> +}
> +
> int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
> {
> uint32_t capset2_max_ver, capset2_max_size;
> +
> + if (g->capset_ids) {
Move capset_ids initialization to virtio_gpu_virgl_init() to save this
conditional. capset_ids also needs to be freed when the device gets
unrealized.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-04-27 7:12 ` Akihiko Odaki
@ 2024-05-01 19:31 ` Dmitry Osipenko
2024-05-01 19:38 ` Dmitry Osipenko
0 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-05-01 19:31 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 4/27/24 10:12, Akihiko Odaki wrote:
>> int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>> {
>> uint32_t capset2_max_ver, capset2_max_size;
>> +
>> + if (g->capset_ids) {
>
> Move capset_ids initialization to virtio_gpu_virgl_init() to save this
> conditional.
Capsets are used before virgl is initialized. The guest first queries the
virtio device features and enables virgl only if a capset is available,
while virgl itself is initialized when the first virtio command is
processed. I.e. it's not possible to move this to virtio_gpu_virgl_init().
> capset_ids also needs to be freed when the device gets
> unrealized.
ACK
--
Best regards,
Dmitry
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-05-01 19:31 ` Dmitry Osipenko
@ 2024-05-01 19:38 ` Dmitry Osipenko
2024-05-01 19:52 ` Dmitry Osipenko
0 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-05-01 19:38 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 5/1/24 22:31, Dmitry Osipenko wrote:
> On 4/27/24 10:12, Akihiko Odaki wrote:
>>> int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>>> {
>>> uint32_t capset2_max_ver, capset2_max_size;
>>> +
>>> + if (g->capset_ids) {
>>
>> Move capset_ids initialization to virtio_gpu_virgl_init() to save this
>> conditional.
>
> Capsets are used before virgl is initialized. The guest first queries the
> virtio device features and enables virgl only if a capset is available,
> while virgl itself is initialized when the first virtio command is
> processed. I.e. it's not possible to move this to virtio_gpu_virgl_init().
Though no, capsets aren't part of device features. I'll move it to
virtio_gpu_virgl_init, thanks.
--
Best regards,
Dmitry
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-05-01 19:38 ` Dmitry Osipenko
@ 2024-05-01 19:52 ` Dmitry Osipenko
2024-05-03 7:32 ` Akihiko Odaki
0 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-05-01 19:52 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 5/1/24 22:38, Dmitry Osipenko wrote:
> On 5/1/24 22:31, Dmitry Osipenko wrote:
>> On 4/27/24 10:12, Akihiko Odaki wrote:
>>>> int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>>>> {
>>>> uint32_t capset2_max_ver, capset2_max_size;
>>>> +
>>>> + if (g->capset_ids) {
>>>
>>> Move capset_ids initialization to virtio_gpu_virgl_init() to save this
>>> conditional.
>>
>> Capsets are used before virgl is initialized. The guest first queries the
>> virtio device features and enables virgl only if a capset is available,
>> while virgl itself is initialized when the first virtio command is
>> processed. I.e. it's not possible to move this to virtio_gpu_virgl_init().
>
> Though no, capsets aren't part of device features. I'll move it to
> virtio_gpu_virgl_init, thanks.
>
The number of capsets actually is part of the generic virtio device config
descriptor. Capset initialization can't be moved without probing the
capsets twice, i.e. it's not worthwhile.
--
Best regards,
Dmitry
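For context, the capset count referred to above is part of the virtio-gpu
configuration space defined by the virtio specification, which the guest
reads before issuing any control command; a sketch of that layout (field
names per the spec, the populating line paraphrased from the GL realize path):

/* virtio-gpu device configuration space (virtio spec). The guest reads
 * num_capsets early, which is why the capset list must be known at
 * device realize time rather than at virgl init time. */
struct virtio_gpu_config {
    uint32_t events_read;
    uint32_t events_clear;
    uint32_t num_scanouts;
    uint32_t num_capsets;
};

/* Roughly how the GL device realize path fills it in (paraphrased):
 *   VIRTIO_GPU_BASE(qdev)->virtio_config.num_capsets =
 *       virtio_gpu_virgl_get_num_capsets(g);
 */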
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [PATCH v9 09/11] virtio-gpu: Register capsets dynamically
2024-05-01 19:52 ` Dmitry Osipenko
@ 2024-05-03 7:32 ` Akihiko Odaki
0 siblings, 0 replies; 21+ messages in thread
From: Akihiko Odaki @ 2024-05-03 7:32 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/05/02 4:52, Dmitry Osipenko wrote:
> On 5/1/24 22:38, Dmitry Osipenko wrote:
>> On 5/1/24 22:31, Dmitry Osipenko wrote:
>>> On 4/27/24 10:12, Akihiko Odaki wrote:
>>>>> int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
>>>>> {
>>>>> uint32_t capset2_max_ver, capset2_max_size;
>>>>> +
>>>>> + if (g->capset_ids) {
>>>>
>>>> Move capset_ids initialization to virtio_gpu_virgl_init() to save this
>>>> conditional.
>>>
>>> Capsets are used before virgl is initialized. The guest first queries the
>>> virtio device features and enables virgl only if a capset is available,
>>> while virgl itself is initialized when the first virtio command is
>>> processed. I.e. it's not possible to move this to virtio_gpu_virgl_init().
>>
>> Though no, capsets aren't part of device features. I'll move it to
>> virtio_gpu_virgl_init, thanks.
>>
>
> The number of capsets actually is part of the generic virtio device config
> descriptor. Capset initialization can't be moved without probing the
> capsets twice, i.e. it's not worthwhile.
>
I see. Then I suggest replacing virtio_gpu_virgl_get_num_capsets() with
a function that returns a GArray of capset IDs.
virtio_gpu_gl_device_realize() will assign the returned GArray to
g->capset_ids. virtio_gpu_gl_device_unrealize(), which doesn't exist
yet, will free g->capset_ids later.
This way, you won't need the conditional, and it will be clear that a
GArray allocation happens in virtio_gpu_gl_device_realize() and is
matched with the deallocation in virtio_gpu_gl_device_unrealize().
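A rough sketch of the suggested interface; the helper name is illustrative
and the Venus handling added later in this series is omitted for brevity:

/* Sketch only: build and return the list of supported capset IDs.
 * The caller owns the returned GArray. */
static GArray *virtio_gpu_virgl_get_capsets(void)
{
    uint32_t max_ver, max_size;
    uint32_t id;
    GArray *capset_ids = g_array_new(false, false, sizeof(uint32_t));

    /* VIRGL is always supported. */
    id = VIRTIO_GPU_CAPSET_VIRGL;
    g_array_append_val(capset_ids, id);

    virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2, &max_ver, &max_size);
    if (max_ver) {
        id = VIRTIO_GPU_CAPSET_VIRGL2;
        g_array_append_val(capset_ids, id);
    }

    return capset_ids;
}

/* Then, roughly:
 *   virtio_gpu_gl_device_realize():   g->capset_ids = virtio_gpu_virgl_get_capsets();
 *   virtio_gpu_gl_device_unrealize(): g_array_unref(g->capset_ids);
 */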
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v9 10/11] virtio-gpu: Support Venus context
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (8 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 09/11] virtio-gpu: Register capsets dynamically Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-27 7:29 ` Akihiko Odaki
2024-04-25 15:45 ` [PATCH v9 11/11] migration/virtio: Add virtio-gpu section Dmitry Osipenko
10 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
From: Antonio Caggiano <antonio.caggiano@collabora.com>
Request the Venus context when initializing VirGL if the vulkan=true flag is
set for the virtio-gpu device.
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
hw/display/virtio-gpu-gl.c | 2 ++
hw/display/virtio-gpu-virgl.c | 22 ++++++++++++++++++----
hw/display/virtio-gpu.c | 13 +++++++++++++
include/hw/virtio/virtio-gpu.h | 3 +++
meson.build | 1 +
5 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index a8892bcc5346..8e475d28f857 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -138,6 +138,8 @@ static void virtio_gpu_gl_device_realize(DeviceState *qdev, Error **errp)
static Property virtio_gpu_gl_properties[] = {
DEFINE_PROP_BIT("stats", VirtIOGPU, parent_obj.conf.flags,
VIRTIO_GPU_FLAG_STATS_ENABLED, false),
+ DEFINE_PROP_BIT("vulkan", VirtIOGPU, parent_obj.conf.flags,
+ VIRTIO_GPU_FLAG_VENUS_ENABLED, false),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/hw/display/virtio-gpu-virgl.c b/hw/display/virtio-gpu-virgl.c
index 9aa1fd78f1e1..969272315c2a 100644
--- a/hw/display/virtio-gpu-virgl.c
+++ b/hw/display/virtio-gpu-virgl.c
@@ -1135,6 +1135,11 @@ int virtio_gpu_virgl_init(VirtIOGPU *g)
flags |= VIRGL_RENDERER_D3D11_SHARE_TEXTURE;
}
#endif
+#ifdef VIRGL_RENDERER_VENUS
+ if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+ flags |= VIRGL_RENDERER_VENUS | VIRGL_RENDERER_RENDER_SERVER;
+ }
+#endif
ret = virgl_renderer_init(g, flags, &virtio_gpu_3d_cbs);
if (ret != 0) {
@@ -1160,7 +1165,7 @@ static void virtio_gpu_virgl_add_capset(VirtIOGPU *g, uint32_t capset_id)
int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
{
- uint32_t capset2_max_ver, capset2_max_size;
+ uint32_t capset_max_ver, capset_max_size;
if (g->capset_ids) {
return g->capset_ids->len;
@@ -1172,11 +1177,20 @@ int virtio_gpu_virgl_get_num_capsets(VirtIOGPU *g)
virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL);
virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VIRGL2,
- &capset2_max_ver,
- &capset2_max_size);
- if (capset2_max_ver) {
+ &capset_max_ver,
+ &capset_max_size);
+ if (capset_max_ver) {
virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VIRGL2);
}
+ if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+ virgl_renderer_get_cap_set(VIRTIO_GPU_CAPSET_VENUS,
+ &capset_max_ver,
+ &capset_max_size);
+ if (capset_max_size) {
+ virtio_gpu_virgl_add_capset(g, VIRTIO_GPU_CAPSET_VENUS);
+ }
+ }
+
return g->capset_ids->len;
}
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 45c1f2006712..e86326b25a72 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1491,6 +1491,19 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
#endif
}
+ if (virtio_gpu_venus_enabled(g->parent_obj.conf)) {
+#ifdef HAVE_VIRGL_VENUS
+ if (!virtio_gpu_blob_enabled(g->parent_obj.conf) ||
+ !virtio_gpu_hostmem_enabled(g->parent_obj.conf)) {
+ error_setg(errp, "venus requires enabled blob and hostmem options");
+ return;
+ }
+#else
+ error_setg(errp, "old virglrenderer, venus unsupported");
+ return;
+#endif
+ }
+
if (!virtio_gpu_base_device_realize(qdev,
virtio_gpu_handle_ctrl_cb,
virtio_gpu_handle_cursor_cb,
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 32f38d86c908..7af81131499c 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -99,6 +99,7 @@ enum virtio_gpu_base_conf_flags {
VIRTIO_GPU_FLAG_BLOB_ENABLED,
VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
+ VIRTIO_GPU_FLAG_VENUS_ENABLED,
};
#define virtio_gpu_virgl_enabled(_cfg) \
@@ -117,6 +118,8 @@ enum virtio_gpu_base_conf_flags {
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
#define virtio_gpu_hostmem_enabled(_cfg) \
(_cfg.hostmem > 0)
+#define virtio_gpu_venus_enabled(_cfg) \
+ (_cfg.flags & (1 << VIRTIO_GPU_FLAG_VENUS_ENABLED))
struct virtio_gpu_base_conf {
uint32_t max_outputs;
diff --git a/meson.build b/meson.build
index 5ef50811b6ba..4e03349c9d10 100644
--- a/meson.build
+++ b/meson.build
@@ -2290,6 +2290,7 @@ if virgl.version().version_compare('>=1.0.0')
config_host_data.set('HAVE_VIRGL_D3D_INFO_EXT', 1)
config_host_data.set('HAVE_VIRGL_CONTEXT_CREATE_WITH_FLAGS', 1)
config_host_data.set('HAVE_VIRGL_RESOURCE_BLOB', 1)
+ config_host_data.set('HAVE_VIRGL_VENUS', 1)
endif
config_host_data.set('CONFIG_VIRTFS', have_virtfs)
config_host_data.set('CONFIG_VTE', vte.found())
--
2.44.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH v9 10/11] virtio-gpu: Support Venus context
2024-04-25 15:45 ` [PATCH v9 10/11] virtio-gpu: Support Venus context Dmitry Osipenko
@ 2024-04-27 7:29 ` Akihiko Odaki
0 siblings, 0 replies; 21+ messages in thread
From: Akihiko Odaki @ 2024-04-27 7:29 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/04/26 0:45, Dmitry Osipenko wrote:
> From: Antonio Caggiano <antonio.caggiano@collabora.com>
>
> Request the Venus context when initializing VirGL if the vulkan=true flag is
> set for the virtio-gpu device.
Naming it vulkan is a bit confusing, as there is also a GFXSTREAM_VULKAN
capset, though virgl does not support it. I think you can just name it venus.
^ permalink raw reply [flat|nested] 21+ messages in thread
* [PATCH v9 11/11] migration/virtio: Add virtio-gpu section
2024-04-25 15:45 [PATCH v9 00/11] Support blob memory and venus on qemu Dmitry Osipenko
` (9 preceding siblings ...)
2024-04-25 15:45 ` [PATCH v9 10/11] virtio-gpu: Support Venus context Dmitry Osipenko
@ 2024-04-25 15:45 ` Dmitry Osipenko
2024-04-27 7:27 ` Akihiko Odaki
10 siblings, 1 reply; 21+ messages in thread
From: Dmitry Osipenko @ 2024-04-25 15:45 UTC (permalink / raw)
To: Akihiko Odaki, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
Document virtio-gpu migration specifics.
Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
---
docs/devel/migration/virtio.rst | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/docs/devel/migration/virtio.rst b/docs/devel/migration/virtio.rst
index 611a18b82151..67f5fcfed196 100644
--- a/docs/devel/migration/virtio.rst
+++ b/docs/devel/migration/virtio.rst
@@ -113,3 +113,10 @@ virtio_load() returned (like e.g. code depending on features).
Any extension of the state being migrated should be done in subsections
added to the core for compatibility reasons. If transport or device specific
state is added, core needs to invoke a callback from the new subsection.
+
+VirtIO-GPU migration
+====================
+VirtIO-GPU doesn't adhere to the common virtio migration scheme. It doesn't
+support saving/loading of the virtio device state; instead, it uses generic
+device migration management on top of the virtio core to save/load GPU state.
+Migration of virgl and rutabaga state is not supported.
--
2.44.0
^ permalink raw reply related [flat|nested] 21+ messages in thread
* Re: [PATCH v9 11/11] migration/virtio: Add virtio-gpu section
2024-04-25 15:45 ` [PATCH v9 11/11] migration/virtio: Add virtio-gpu section Dmitry Osipenko
@ 2024-04-27 7:27 ` Akihiko Odaki
0 siblings, 0 replies; 21+ messages in thread
From: Akihiko Odaki @ 2024-04-27 7:27 UTC (permalink / raw)
To: Dmitry Osipenko, Huang Rui, Marc-André Lureau,
Philippe Mathieu-Daudé, Gerd Hoffmann, Michael S . Tsirkin,
Stefano Stabellini, Anthony PERARD, Antonio Caggiano,
Dr . David Alan Gilbert, Robert Beckett, Gert Wollny,
Alex Bennée
Cc: qemu-devel, Gurchetan Singh, ernunes, Alyssa Ross,
Roger Pau Monné, Alex Deucher, Stefano Stabellini,
Christian König, Xenia Ragiadakou,
Pierre-Eric Pelloux-Prayer, Honglei Huang, Julia Zhang,
Chen Jiqian, Yiwei Zhang
On 2024/04/26 0:45, Dmitry Osipenko wrote:
> Document virtio-gpu migration specifics.
>
> Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
> Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
> ---
> docs/devel/migration/virtio.rst | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/docs/devel/migration/virtio.rst b/docs/devel/migration/virtio.rst
> index 611a18b82151..67f5fcfed196 100644
> --- a/docs/devel/migration/virtio.rst
> +++ b/docs/devel/migration/virtio.rst
> @@ -113,3 +113,10 @@ virtio_load() returned (like e.g. code depending on features).
> Any extension of the state being migrated should be done in subsections
> added to the core for compatibility reasons. If transport or device specific
> state is added, core needs to invoke a callback from the new subsection.
> +
> +VirtIO-GPU migration
> +====================
> +VirtIO-GPU doesn't adhere to the common virtio migration scheme. It doesn't
> +support saving/loading of the virtio device state; instead, it uses generic
> +device migration management on top of the virtio core to save/load GPU state.
> +Migration of virgl and rutabaga state is not supported.
Sorry for the confusion, but I didn't mean to add a subsection to the
documentation. I intended to refer to the migration data structure
called a subsection, which is documented at:
docs/devel/migration/main.rst
Device-specific information is not worth describing here.
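For readers who haven't met the term, a "subsection" here is the optional-state
mechanism of QEMU's VMState API described in docs/devel/migration/main.rst.
A generic illustration of the pattern, with a made-up FooDevice used purely
for the example:

#include "qemu/osdep.h"
#include "migration/vmstate.h"

typedef struct FooDevice {
    uint32_t base_value;
    uint32_t extra_value;
    bool extra_feature_enabled;
} FooDevice;

/* The subsection is sent only when .needed returns true, so migration
 * streams stay compatible with older QEMU versions that don't know
 * about the extra state. */
static bool foo_extra_state_needed(void *opaque)
{
    FooDevice *s = opaque;

    return s->extra_feature_enabled;
}

static const VMStateDescription vmstate_foo_extra = {
    .name = "foo/extra-state",
    .version_id = 1,
    .minimum_version_id = 1,
    .needed = foo_extra_state_needed,
    .fields = (const VMStateField[]) {
        VMSTATE_UINT32(extra_value, FooDevice),
        VMSTATE_END_OF_LIST()
    }
};

static const VMStateDescription vmstate_foo = {
    .name = "foo",
    .version_id = 1,
    .minimum_version_id = 1,
    .fields = (const VMStateField[]) {
        VMSTATE_UINT32(base_value, FooDevice),
        VMSTATE_END_OF_LIST()
    },
    .subsections = (const VMStateDescription * const []) {
        &vmstate_foo_extra,
        NULL
    }
};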
^ permalink raw reply [flat|nested] 21+ messages in thread