* [PATCH v1 0/9] gfxstream + rutabaga_gfx
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
From: Gurchetan Singh <gurchetansingh@google.com>
Latest iteration of rutabaga_gfx + gfxstream patches. Previous version
and more background available here:
https://patchew.org/QEMU/20230421011223.718-1-gurchetansingh@chromium.org/
Changes since RFC:
- All important memory tests pass
- Went with a separate virtio-gpu-rutabaga device, as suggested by Bernhard
Beschow
- Incorporated review feedback, mostly from Akihiko Odaki
- gfxstream has a new unified guest/host repo + build system improvements
- Added documentation on virtio-gpu
- New instructions on how to build are available in the tracking bug [a]
In terms of API stability/versioning/packaging, once this series is
reviewed, the plan is to cut a "gfxstream upstream release branch". We
will then have the same API guarantees as any other QEMU project, i.e.
no breaking API changes for 5 years.
The Android Emulator will build both gfxstream (to get bug fixes fast)
and QEMU 8.0+ (due to regulatory requirements) from source, so we haven't
created a gfxstream Debian/Ubuntu package since we don't actually need one.
We do plan to upload our QEMU 8.0+ gfxstream-enabled builds somewhere on
AOSP when they are ready.
Given this, being in-tree to reduce technical debt is the more important
goal for us. Let us know if there are any strong opinions on packaging.
Otherwise, feedback + reviews welcome!
[a] https://gitlab.com/qemu-project/qemu/-/issues/1611
Antonio Caggiano (2):
virtio-gpu: CONTEXT_INIT feature
virtio-gpu: blob prep
Dr. David Alan Gilbert (1):
virtio: Add shared memory capability
Gerd Hoffmann (1):
virtio-gpu: hostmem
Gurchetan Singh (5):
gfxstream + rutabaga prep: added needed definitions, fields, and options
gfxstream + rutabaga: add initial support for gfxstream
gfxstream + rutabaga: meson support
gfxstream + rutabaga: enable rutabaga
docs/system: add basic virtio-gpu documentation
docs/system/device-emulation.rst | 1 +
docs/system/devices/virtio-gpu.rst | 80 ++
hw/display/meson.build | 22 +
hw/display/virtio-gpu-base.c | 6 +-
hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
hw/display/virtio-gpu-pci.c | 14 +
hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
hw/display/virtio-gpu.c | 17 +-
hw/display/virtio-vga-rutabaga.c | 52 ++
hw/display/virtio-vga.c | 33 +-
hw/virtio/virtio-pci.c | 18 +
include/hw/virtio/virtio-gpu-bswap.h | 18 +
include/hw/virtio/virtio-gpu.h | 34 +
include/hw/virtio/virtio-pci.h | 4 +
meson.build | 7 +
meson_options.txt | 2 +
scripts/meson-buildoptions.sh | 3 +
softmmu/qdev-monitor.c | 3 +
softmmu/vl.c | 1 +
19 files changed, 1431 insertions(+), 20 deletions(-)
create mode 100644 docs/system/devices/virtio-gpu.rst
create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
create mode 100644 hw/display/virtio-gpu-rutabaga.c
create mode 100644 hw/display/virtio-vga-rutabaga.c
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 1/9] virtio: Add shared memory capability
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Define a new capability type 'VIRTIO_PCI_CAP_SHARED_MEMORY_CFG' to allow
defining shared memory regions with sizes and offsets of 2^32 and beyond.
Multiple instances of the capability are allowed and are distinguished
by a device-specific 'id'.
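For illustration, a consumer reassembles the 64-bit offset and length
from the split 32-bit halves roughly as follows (a sketch only; 'cap'
stands for a struct virtio_pci_cap64 read back from PCI config space):

    uint64_t offset = le32_to_cpu(cap.cap.offset) |
                      ((uint64_t)le32_to_cpu(cap.offset_hi) << 32);
    uint64_t length = le32_to_cpu(cap.cap.length) |
                      ((uint64_t)le32_to_cpu(cap.length_hi) << 32);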
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Reviewed-by: Gurchetan Singh <gurchetansingh@chromium.org>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
hw/virtio/virtio-pci.c | 18 ++++++++++++++++++
include/hw/virtio/virtio-pci.h | 4 ++++
2 files changed, 22 insertions(+)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index edbc0daa18..da8c9ea12d 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1435,6 +1435,24 @@ static int virtio_pci_add_mem_cap(VirtIOPCIProxy *proxy,
return offset;
}
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+ uint8_t bar, uint64_t offset, uint64_t length,
+ uint8_t id)
+{
+ struct virtio_pci_cap64 cap = {
+ .cap.cap_len = sizeof cap,
+ .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
+ };
+
+ cap.cap.bar = bar;
+ cap.cap.length = cpu_to_le32(length);
+ cap.length_hi = cpu_to_le32(length >> 32);
+ cap.cap.offset = cpu_to_le32(offset);
+ cap.offset_hi = cpu_to_le32(offset >> 32);
+ cap.cap.id = id;
+ return virtio_pci_add_mem_cap(proxy, &cap.cap);
+}
+
static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
unsigned size)
{
diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
index ab2051b64b..5a3f182f99 100644
--- a/include/hw/virtio/virtio-pci.h
+++ b/include/hw/virtio/virtio-pci.h
@@ -264,4 +264,8 @@ unsigned virtio_pci_optimal_num_queues(unsigned fixed_queues);
void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
int n, bool assign,
bool with_irqfd);
+
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy, uint8_t bar, uint64_t offset,
+ uint64_t length, uint8_t id);
+
#endif
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 2/9] virtio-gpu: CONTEXT_INIT feature
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
From: Antonio Caggiano <antonio.caggiano@collabora.com>
The feature can now be enabled by any backend that wants it.
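For example, a backend opts in from its realize function like so (a
sketch, mirroring how the rutabaga device later in this series sets its
flags):

    bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);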
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
---
hw/display/virtio-gpu-base.c | 3 +++
include/hw/virtio/virtio-gpu.h | 3 +++
2 files changed, 6 insertions(+)
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index a29f191aa8..6c5f1f327f 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -215,6 +215,9 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
if (virtio_gpu_blob_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_RESOURCE_BLOB);
}
+ if (virtio_gpu_context_init_enabled(g->conf)) {
+ features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
+ }
return features;
}
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 7a5f8056ea..8f9b3e4ac6 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -93,6 +93,7 @@ enum virtio_gpu_base_conf_flags {
VIRTIO_GPU_FLAG_EDID_ENABLED,
VIRTIO_GPU_FLAG_DMABUF_ENABLED,
VIRTIO_GPU_FLAG_BLOB_ENABLED,
+ VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
};
#define virtio_gpu_virgl_enabled(_cfg) \
@@ -105,6 +106,8 @@ enum virtio_gpu_base_conf_flags {
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
#define virtio_gpu_blob_enabled(_cfg) \
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
+#define virtio_gpu_context_init_enabled(_cfg) \
+ (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
struct virtio_gpu_base_conf {
uint32_t max_outputs;
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 3/9] virtio-gpu: hostmem
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
From: Gerd Hoffmann <kraxel@redhat.com>
Use VIRTIO_GPU_SHM_ID_HOST_VISIBLE as the shared memory region id for
virtio-gpu.
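With this in place, a host-visible shared memory region can be reserved
via the new 'hostmem' property, e.g. (the size here is only an example):

    -device virtio-gpu-pci,hostmem=256M

The region is registered as a 64-bit prefetchable BAR 4 and advertised
to the guest through the shared memory capability from the previous
patch.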
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
hw/display/virtio-gpu-pci.c | 14 ++++++++++++++
hw/display/virtio-gpu.c | 1 +
hw/display/virtio-vga.c | 33 ++++++++++++++++++++++++---------
include/hw/virtio/virtio-gpu.h | 5 +++++
4 files changed, 44 insertions(+), 9 deletions(-)
diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index 93f214ff58..da6a99f038 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -33,6 +33,20 @@ static void virtio_gpu_pci_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
DeviceState *vdev = DEVICE(g);
int i;
+ if (virtio_gpu_hostmem_enabled(g->conf)) {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
virtio_pci_force_virtio_1(vpci_dev);
if (!qdev_realize(vdev, BUS(&vpci_dev->bus), errp)) {
return;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 347e17d490..23ef371da7 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1487,6 +1487,7 @@ static Property virtio_gpu_properties[] = {
256 * MiB),
DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
+ DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index e6fb0aa876..c8552ff760 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -115,17 +115,32 @@ static void virtio_vga_base_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
pci_register_bar(&vpci_dev->pci_dev, 0,
PCI_BASE_ADDRESS_MEM_PREFETCH, &vga->vram);
- /*
- * Configure virtio bar and regions
- *
- * We use bar #2 for the mmio regions, to be compatible with stdvga.
- * virtio regions are moved to the end of bar #2, to make room for
- * the stdvga mmio registers at the start of bar #2.
- */
- vpci_dev->modern_mem_bar_idx = 2;
- vpci_dev->msix_bar_idx = 4;
vpci_dev->modern_io_bar_idx = 5;
+ if (!virtio_gpu_hostmem_enabled(g->conf)) {
+ /*
+ * Configure virtio bar and regions
+ *
+ * We use bar #2 for the mmio regions, to be compatible with stdvga.
+ * virtio regions are moved to the end of bar #2, to make room for
+ * the stdvga mmio registers at the start of bar #2.
+ */
+ vpci_dev->modern_mem_bar_idx = 2;
+ vpci_dev->msix_bar_idx = 4;
+ } else {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
if (!(vpci_dev->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ)) {
/*
* with page-per-vq=off there is no padding space we can use
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 8f9b3e4ac6..1b16412f43 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -108,12 +108,15 @@ enum virtio_gpu_base_conf_flags {
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
#define virtio_gpu_context_init_enabled(_cfg) \
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_hostmem_enabled(_cfg) \
+ (_cfg.hostmem > 0)
struct virtio_gpu_base_conf {
uint32_t max_outputs;
uint32_t flags;
uint32_t xres;
uint32_t yres;
+ uint64_t hostmem;
};
struct virtio_gpu_ctrl_command {
@@ -137,6 +140,8 @@ struct VirtIOGPUBase {
int renderer_blocked;
int enable;
+ MemoryRegion hostmem;
+
struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
int enabled_output_bitmask;
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 4/9] virtio-gpu: blob prep
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
From: Antonio Caggiano <antonio.caggiano@collabora.com>
This adds the preparatory functions needed to:
- decode blob commands
- track iovecs
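A later patch decodes a guest blob command with these helpers roughly as
follows (a sketch; VIRTIO_GPU_FILL_CMD copies the command out of the
guest ring, and the bswap helper fixes up endianness):

    struct virtio_gpu_resource_create_blob cblob;

    VIRTIO_GPU_FILL_CMD(cblob);
    virtio_gpu_create_blob_bswap(&cblob);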
Signed-off-by: Antonio Caggiano <antonio.caggiano@collabora.com>
Signed-off-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
hw/display/virtio-gpu.c | 11 +++--------
include/hw/virtio/virtio-gpu-bswap.h | 18 ++++++++++++++++++
include/hw/virtio/virtio-gpu.h | 5 +++++
3 files changed, 26 insertions(+), 8 deletions(-)
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 23ef371da7..32da46fefc 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -33,16 +33,11 @@
#define VIRTIO_GPU_VM_VERSION 1
-static struct virtio_gpu_simple_resource*
-virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
static struct virtio_gpu_simple_resource *
virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
bool require_backing,
const char *caller, uint32_t *error);
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
- struct virtio_gpu_simple_resource *res);
-
void virtio_gpu_update_cursor_data(VirtIOGPU *g,
struct virtio_gpu_scanout *s,
uint32_t resource_id)
@@ -115,7 +110,7 @@ static void update_cursor(VirtIOGPU *g, struct virtio_gpu_update_cursor *cursor)
cursor->resource_id ? 1 : 0);
}
-static struct virtio_gpu_simple_resource *
+struct virtio_gpu_simple_resource *
virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
{
struct virtio_gpu_simple_resource *res;
@@ -919,8 +914,8 @@ void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
g_free(iov);
}
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
- struct virtio_gpu_simple_resource *res)
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res)
{
virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
res->iov = NULL;
diff --git a/include/hw/virtio/virtio-gpu-bswap.h b/include/hw/virtio/virtio-gpu-bswap.h
index 9124108485..dd1975e2d4 100644
--- a/include/hw/virtio/virtio-gpu-bswap.h
+++ b/include/hw/virtio/virtio-gpu-bswap.h
@@ -63,10 +63,28 @@ virtio_gpu_create_blob_bswap(struct virtio_gpu_resource_create_blob *cblob)
{
virtio_gpu_ctrl_hdr_bswap(&cblob->hdr);
le32_to_cpus(&cblob->resource_id);
+ le32_to_cpus(&cblob->blob_mem);
le32_to_cpus(&cblob->blob_flags);
+ le32_to_cpus(&cblob->nr_entries);
+ le64_to_cpus(&cblob->blob_id);
le64_to_cpus(&cblob->size);
}
+static inline void
+virtio_gpu_map_blob_bswap(struct virtio_gpu_resource_map_blob *mblob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&mblob->hdr);
+ le32_to_cpus(&mblob->resource_id);
+ le64_to_cpus(&mblob->offset);
+}
+
+static inline void
+virtio_gpu_unmap_blob_bswap(struct virtio_gpu_resource_unmap_blob *ublob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&ublob->hdr);
+ le32_to_cpus(&ublob->resource_id);
+}
+
static inline void
virtio_gpu_scanout_blob_bswap(struct virtio_gpu_set_scanout_blob *ssb)
{
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 1b16412f43..5927ca1864 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -251,6 +251,9 @@ void virtio_gpu_base_fill_display_info(VirtIOGPUBase *g,
struct virtio_gpu_resp_display_info *dpy_info);
/* virtio-gpu.c */
+struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+
void virtio_gpu_ctrl_response(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd,
struct virtio_gpu_ctrl_hdr *resp,
@@ -269,6 +272,8 @@ int virtio_gpu_create_mapping_iov(VirtIOGPU *g,
uint32_t *niov);
void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
struct iovec *iov, uint32_t count);
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res);
void virtio_gpu_process_cmdq(VirtIOGPU *g);
void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
void virtio_gpu_reset(VirtIODevice *vdev);
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 5/9] gfxstream + rutabaga prep: added needed definitions, fields, and options
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
This modifies the common virtio-gpu.h file to add the fields and
definitions needed by gfxstream/rutabaga via VirtioGpuRutabaga:
- a colon-separated list of capset names, as defined in the virtio spec
- a Wayland socket path to enable guest Wayland passthrough
An example invocation using these options would be:
-device virtio-vga-rutabaga,capset_names=gfxstream:cross-domain,\
wayland_socket_path=/run/user/1000/wayland-0,hostmem=8G
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
v2: void *rutabaga --> struct rutabaga *rutabaga (Akihiko)
have a separate rutabaga device instead of using the GL device (Bernhard)
include/hw/virtio/virtio-gpu.h | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 5927ca1864..5a1b15ccb9 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -38,6 +38,9 @@ OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPUGL, VIRTIO_GPU_GL)
#define TYPE_VHOST_USER_GPU "vhost-user-gpu"
OBJECT_DECLARE_SIMPLE_TYPE(VhostUserGPU, VHOST_USER_GPU)
+#define TYPE_VIRTIO_GPU_RUTABAGA "virtio-gpu-rutabaga-device"
+OBJECT_DECLARE_SIMPLE_TYPE(VirtioGpuRutabaga, VIRTIO_GPU_RUTABAGA)
+
struct virtio_gpu_simple_resource {
uint32_t resource_id;
uint32_t width;
@@ -94,6 +97,7 @@ enum virtio_gpu_base_conf_flags {
VIRTIO_GPU_FLAG_DMABUF_ENABLED,
VIRTIO_GPU_FLAG_BLOB_ENABLED,
VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
+ VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
};
#define virtio_gpu_virgl_enabled(_cfg) \
@@ -108,6 +112,8 @@ enum virtio_gpu_base_conf_flags {
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
#define virtio_gpu_context_init_enabled(_cfg) \
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_rutabaga_enabled(_cfg) \
+ (_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
#define virtio_gpu_hostmem_enabled(_cfg) \
(_cfg.hostmem > 0)
@@ -229,6 +235,21 @@ struct VhostUserGPU {
bool backend_blocked;
};
+struct rutabaga;
+
+struct VirtioGpuRutabaga {
+ struct VirtIOGPU parent_obj;
+
+ bool rutabaga_active;
+ char *capset_names;
+ char *wayland_socket_path;
+ char *wsi;
+ bool headless;
+ uint32_t num_capsets;
+ struct rutabaga *rutabaga;
+ AioContext *ctx;
+};
+
#define VIRTIO_GPU_FILL_CMD(out) do { \
size_t s; \
s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0, \
--
2.41.0.255.g8b1d071c50-goog
* [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream
From: Gurchetan Singh @ 2023-07-11 2:56 UTC
To: qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
This adds initial support for gfxstream and cross-domain. Both
features rely on virtio-gpu blob resources and context types, which
are also implemented in this patch.
gfxstream has a long and illustrious history in Android graphics
paravirtualization. It has been powering graphics in the Android
Studio Emulator, the main developer platform, for more than a decade.
Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
The key design characteristics were a 1:1 threading model and
auto-generation, which fit nicely with the OpenGL ES spec. It also
allowed easy layering with ANGLE on the host, which provides the GLES
implementation in Windows and macOS environments.
gfxstream has traditionally been maintained by a single engineer, and
from 2015 to 2021 the goldfish throne passed to Frank Yang.
Historians often remark that this glorious reign ("pax gfxstreama" is
the academic term) was comparable to that of Augustus and both Queen
Elizabeths. Just to name a few accomplishments in a resplendent
panoply: higher versions of GLES, address space graphics, snapshot
support, and CTS-compliant Vulkan [b].
One major drawback was the use of out-of-tree goldfish drivers.
Android engineers didn't know much about DRM/KMS, and especially TTM,
so a simple guest-to-host pipe was conceived.
Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
the Mesa/virglrenderer communities. In 2018, the initial virtio-gpu
port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
It was a symbol-compatible replacement for virglrenderer [c], named
"AVDVirglrenderer". This implementation forms the basis of the
current gfxstream host implementation still in use today.
cross-domain support follows a similar arc. Originally conceived by
Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
2018, it initially relied on the downstream "virtio-wl" device.
In 2020 and 2021, virtio-gpu was extended to include blob resources
and multiple timelines by yours truly, features that both gfxstream and
cross-domain require to function correctly.
Right now, we stand at the precipice of a truly fantastic possibility:
the Android Emulator powered by upstream QEMU and an upstream Linux
kernel. gfxstream will then be packaged properly, and app developers
can even fix gfxstream bugs on their own if they encounter them.
It's been quite the ride, my friends. Where gfxstream will head next,
nobody really knows. I wouldn't be surprised if it's around for
another decade, maintained by a new generation of Android graphics
enthusiasts.
Technical details:
- Very simple initial display integration: just uses Pixman
- Largely a 1:1 mapping of virtio-gpu hypercalls to rutabaga function
calls, as sampled below
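For example (taken from the command dispatch in the diff below):

    VIRTIO_GPU_CMD_SUBMIT_3D           -> rutabaga_submit_command()
    VIRTIO_GPU_CMD_RESOURCE_UNREF      -> rutabaga_resource_unref()
    VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D -> rutabaga_resource_transfer_write()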
[a] https://android-review.googlesource.com/c/platform/development/+/34470
[b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
[c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
v2: Incorporated various suggestions from Akihiko Odaki and Bernhard Beschow
- Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
- Used error_report(..)
- Used g_autofree to fix leaks on error paths
- Removed unnecessary casts
- added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
hw/display/virtio-vga-rutabaga.c | 52 ++
3 files changed, 1188 insertions(+)
create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
create mode 100644 hw/display/virtio-gpu-rutabaga.c
create mode 100644 hw/display/virtio-vga-rutabaga.c
diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
new file mode 100644
index 0000000000..5765bef266
--- /dev/null
+++ b/hw/display/virtio-gpu-pci-rutabaga.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-gpu-pci.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
+typedef struct VirtIOGPURUTABAGAPCI VirtIOGPURUTABAGAPCI;
+DECLARE_INSTANCE_CHECKER(VirtIOGPURUTABAGAPCI, VIRTIO_GPU_RUTABAGA_PCI,
+ TYPE_VIRTIO_GPU_RUTABAGA_PCI)
+
+struct VirtIOGPURUTABAGAPCI {
+ VirtIOGPUPCIBase parent_obj;
+ VirtioGpuRutabaga vdev;
+};
+
+static void virtio_gpu_rutabaga_initfn(Object *obj)
+{
+ VirtIOGPURUTABAGAPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VIRTIO_GPU_RUTABAGA);
+ VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
+ .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
+ .parent = TYPE_VIRTIO_GPU_PCI_BASE,
+ .instance_size = sizeof(VirtIOGPURUTABAGAPCI),
+ .instance_init = virtio_gpu_rutabaga_initfn,
+};
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
+module_kconfig(VIRTIO_PCI);
+
+static void virtio_gpu_rutabaga_pci_register_types(void)
+{
+ virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
+}
+
+type_init(virtio_gpu_rutabaga_pci_register_types)
+
+module_dep("hw-display-virtio-gpu-pci");
diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
new file mode 100644
index 0000000000..b60a30a093
--- /dev/null
+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -0,0 +1,1088 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "qemu/osdep.h"
+#include "qemu/error-report.h"
+#include "qemu/iov.h"
+#include "trace.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
+#include "hw/virtio/virtio-iommu.h"
+
+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
+
+#define CHECK(condition, cmd) \
+ do { \
+ if (!(condition)) { \
+ error_report("CHECK failed in %s() %s:" "%d", __func__, \
+ __FILE__, __LINE__); \
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
+ return; \
+ } \
+ } while (0)
+
+#define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
+
+#define MAX_SLOTS 4096
+
+struct MemoryRegionInfo {
+ int used;
+ MemoryRegion mr;
+ uint32_t resource_id;
+};
+
+static struct MemoryRegionInfo memory_regions[MAX_SLOTS];
+
+struct rutabaga_aio_data {
+ struct VirtioGpuRutabaga *vr;
+ struct rutabaga_fence fence;
+};
+
+static void
+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
+ uint32_t resource_id)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_transfer transfer = { 0 };
+ struct iovec transfer_iovec;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ res = virtio_gpu_find_resource(g, resource_id);
+ if (!res) {
+ return;
+ }
+
+ if (res->width != s->current_cursor->width ||
+ res->height != s->current_cursor->height) {
+ return;
+ }
+
+ transfer.x = 0;
+ transfer.y = 0;
+ transfer.z = 0;
+ transfer.w = res->width;
+ transfer.h = res->height;
+ transfer.d = 1;
+
+ transfer_iovec.iov_base = (void *)s->current_cursor->data;
+ transfer_iovec.iov_len = res->width * res->height * 4;
+
+ rutabaga_resource_transfer_read(vr->rutabaga, 0,
+ resource_id, &transfer,
+ &transfer_iovec);
+}
+
+static void
+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
+{
+ VirtIOGPU *g = VIRTIO_GPU(b);
+ virtio_gpu_process_cmdq(g);
+}
+
+static void
+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_create_3d rc_3d = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_create_2d c2d;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(c2d);
+ trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
+ c2d.width, c2d.height);
+
+ rc_3d.target = 2;
+ rc_3d.format = c2d.format;
+ rc_3d.bind = (1 << 1);
+ rc_3d.width = c2d.width;
+ rc_3d.height = c2d.height;
+ rc_3d.depth = 1;
+ rc_3d.array_size = 1;
+ rc_3d.last_level = 0;
+ rc_3d.nr_samples = 0;
+ rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
+
+ result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
+ CHECK_RESULT(result, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+ res->width = c2d.width;
+ res->height = c2d.height;
+ res->format = c2d.format;
+ res->resource_id = c2d.resource_id;
+
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_create_3d rc_3d = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_create_3d c3d;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(c3d);
+
+ trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
+ c3d.width, c3d.height, c3d.depth);
+
+ rc_3d.target = c3d.target;
+ rc_3d.format = c3d.format;
+ rc_3d.bind = c3d.bind;
+ rc_3d.width = c3d.width;
+ rc_3d.height = c3d.height;
+ rc_3d.depth = c3d.depth;
+ rc_3d.array_size = c3d.array_size;
+ rc_3d.last_level = c3d.last_level;
+ rc_3d.nr_samples = c3d.nr_samples;
+ rc_3d.flags = c3d.flags;
+
+ result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
+ CHECK_RESULT(result, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+ res->width = c3d.width;
+ res->height = c3d.height;
+ res->format = c3d.format;
+ res->resource_id = c3d.resource_id;
+
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_resource_unref(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_unref unref;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(unref);
+
+ trace_virtio_gpu_cmd_res_unref(unref.resource_id);
+
+ res = virtio_gpu_find_resource(g, unref.resource_id);
+ CHECK(res, cmd);
+
+ result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
+ CHECK_RESULT(result, cmd);
+
+ if (res->image) {
+ pixman_image_unref(res->image);
+ }
+
+ QTAILQ_REMOVE(&g->reslist, res, next);
+ g_free(res);
+}
+
+static void
+rutabaga_cmd_context_create(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_create cc;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cc);
+ trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
+ cc.debug_name);
+
+ result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
+ cc.context_init, cc.debug_name, cc.nlen);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_context_destroy(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_destroy cd;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cd);
+ trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
+
+ result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result, i;
+ struct virtio_gpu_scanout *scanout = NULL;
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_transfer transfer = { 0 };
+ struct iovec transfer_iovec;
+ struct virtio_gpu_resource_flush rf;
+ bool found = false;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ if (vr->headless) {
+ return;
+ }
+
+ VIRTIO_GPU_FILL_CMD(rf);
+ trace_virtio_gpu_cmd_res_flush(rf.resource_id,
+ rf.r.width, rf.r.height, rf.r.x, rf.r.y);
+
+ res = virtio_gpu_find_resource(g, rf.resource_id);
+ CHECK(res, cmd);
+
+ for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
+ scanout = &g->parent_obj.scanout[i];
+ if (i == res->scanout_bitmask) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found) {
+ return;
+ }
+
+ transfer.x = 0;
+ transfer.y = 0;
+ transfer.z = 0;
+ transfer.w = res->width;
+ transfer.h = res->height;
+ transfer.d = 1;
+
+ transfer_iovec.iov_base = (void *)pixman_image_get_data(res->image);
+ transfer_iovec.iov_len = res->width * res->height * 4;
+
+ result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
+ rf.resource_id, &transfer,
+ &transfer_iovec);
+ CHECK_RESULT(result, cmd);
+ dpy_gfx_update_full(scanout->con);
+}
+
+static void
+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_scanout *scanout = NULL;
+ struct virtio_gpu_set_scanout ss;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ if (vr->headless) {
+ return;
+ }
+
+ VIRTIO_GPU_FILL_CMD(ss);
+ trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
+ ss.r.width, ss.r.height, ss.r.x, ss.r.y);
+
+ scanout = &g->parent_obj.scanout[ss.scanout_id];
+ g->parent_obj.enable = 1;
+
+ if (ss.resource_id == 0) {
+ return;
+ }
+
+ res = virtio_gpu_find_resource(g, ss.resource_id);
+ CHECK(res, cmd);
+
+ if (!res->image) {
+ pixman_format_code_t pformat;
+ pformat = virtio_gpu_get_pixman_format(res->format);
+ CHECK(pformat, cmd);
+
+ res->image = pixman_image_create_bits(pformat,
+ res->width,
+ res->height,
+ NULL, 0);
+ CHECK(res->image, cmd);
+ pixman_image_ref(res->image);
+ }
+
+ /* realloc the surface ptr */
+ scanout->ds = qemu_create_displaysurface_pixman(res->image);
+ dpy_gfx_replace_surface(scanout->con, NULL);
+ dpy_gfx_replace_surface(scanout->con, scanout->ds);
+ res->scanout_bitmask = ss.scanout_id;
+}
+
+static void
+rutabaga_cmd_submit_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_cmd_submit cs;
+ g_autofree uint8_t *buf = NULL;
+ size_t s;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cs);
+ trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
+
+ buf = g_new0(uint8_t, cs.size);
+ s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+ sizeof(cs), buf, cs.size);
+ CHECK((s == cs.size), cmd);
+
+ result = rutabaga_submit_command(vr->rutabaga, cs.hdr.ctx_id, buf, cs.size);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_to_host_2d t2d;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t2d);
+ trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
+
+ transfer.x = t2d.r.x;
+ transfer.y = t2d.r.y;
+ transfer.z = 0;
+ transfer.w = t2d.r.width;
+ transfer.h = t2d.r.height;
+ transfer.d = 1;
+
+ result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
+ &transfer);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_host_3d t3d;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t3d);
+ trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
+
+ transfer.x = t3d.box.x;
+ transfer.y = t3d.box.y;
+ transfer.z = t3d.box.z;
+ transfer.w = t3d.box.w;
+ transfer.h = t3d.box.h;
+ transfer.d = t3d.box.d;
+ transfer.level = t3d.level;
+ transfer.stride = t3d.stride;
+ transfer.layer_stride = t3d.layer_stride;
+ transfer.offset = t3d.offset;
+
+ result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
+ t3d.resource_id, &transfer);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_host_3d t3d;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t3d);
+ trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
+
+ transfer.x = t3d.box.x;
+ transfer.y = t3d.box.y;
+ transfer.z = t3d.box.z;
+ transfer.w = t3d.box.w;
+ transfer.h = t3d.box.h;
+ transfer.d = t3d.box.d;
+ transfer.level = t3d.level;
+ transfer.stride = t3d.stride;
+ transfer.layer_stride = t3d.layer_stride;
+ transfer.offset = t3d.offset;
+
+ result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
+ t3d.resource_id, &transfer, NULL);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct rutabaga_iovecs vecs = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_attach_backing att_rb;
+ struct iovec *res_iovs;
+ uint32_t res_niov;
+ int ret;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(att_rb);
+ trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
+
+ res = virtio_gpu_find_resource(g, att_rb.resource_id);
+ CHECK(res, cmd);
+ CHECK(!res->iov, cmd);
+
+ ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
+ cmd, NULL, &res_iovs, &res_niov);
+ CHECK_RESULT(ret, cmd);
+
+ vecs.iovecs = res_iovs;
+ vecs.num_iovecs = res_niov;
+
+ ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
+ &vecs);
+ if (ret != 0) {
+ virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
+ }
+}
+
+static void
+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_detach_backing detach_rb;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(detach_rb);
+ trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
+
+ res = virtio_gpu_find_resource(g, detach_rb.resource_id);
+ CHECK(res, cmd);
+
+ rutabaga_resource_detach_backing(vr->rutabaga,
+ detach_rb.resource_id);
+
+ virtio_gpu_cleanup_mapping(g, res);
+}
+
+static void
+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_resource att_res;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(att_res);
+ trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
+ att_res.resource_id);
+
+ result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
+ att_res.resource_id);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_resource det_res;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(det_res);
+ trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
+ det_res.resource_id);
+
+ result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
+ det_res.resource_id);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_get_capset_info info;
+ struct virtio_gpu_resp_capset_info resp;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(info);
+
+ result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
+ &resp.capset_id, &resp.capset_max_version,
+ &resp.capset_max_size);
+ CHECK_RESULT(result, cmd);
+
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_get_capset gc;
+ struct virtio_gpu_resp_capset *resp;
+ uint32_t capset_size;
+ uint32_t current_id;
+ bool found = false;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(gc);
+ for (uint32_t i = 0; i < vr->num_capsets; i++) {
+ result = rutabaga_get_capset_info(vr->rutabaga, i,
+ &current_id, &capset_size,
+ &capset_size);
+ CHECK_RESULT(result, cmd);
+
+ if (current_id == gc.capset_id) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found) {
+ error_report("capset not found!");
+ return;
+ }
+
+ resp = g_malloc0(sizeof(*resp) + capset_size);
+ resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
+ rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
+ (uint8_t *)resp->capset_data, capset_size);
+
+ virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
+ g_free(resp);
+}
+
+static void
+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int result;
+ struct rutabaga_iovecs vecs = { 0 };
+ g_autofree struct virtio_gpu_simple_resource *res = NULL;
+ struct virtio_gpu_simple_resource *resource;
+ struct virtio_gpu_resource_create_blob cblob;
+ struct rutabaga_create_blob rc_blob = { 0 };
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cblob);
+ trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+ CHECK(cblob.resource_id != 0, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+
+ res->resource_id = cblob.resource_id;
+ res->blob_size = cblob.size;
+
+ if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+ result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
+ sizeof(cblob), cmd, &res->addrs,
+ &res->iov, &res->iov_cnt);
+ CHECK_RESULT(result, cmd);
+ }
+
+ rc_blob.blob_id = cblob.blob_id;
+ rc_blob.blob_mem = cblob.blob_mem;
+ rc_blob.blob_flags = cblob.blob_flags;
+ rc_blob.size = cblob.size;
+
+ vecs.iovecs = res->iov;
+ vecs.num_iovecs = res->iov_cnt;
+
+ result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
+ cblob.resource_id, &rc_blob, &vecs,
+ NULL);
+ CHECK_RESULT(result, cmd);
+ resource = g_steal_pointer(&res);
+ QTAILQ_INSERT_HEAD(&g->reslist, resource, next);
+}
+
+static void
+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ uint32_t slot = 0;
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_mapping mapping = { 0 };
+ struct virtio_gpu_resource_map_blob mblob;
+ struct virtio_gpu_resp_map_info resp;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(mblob);
+
+ CHECK(mblob.resource_id != 0, cmd);
+
+ res = virtio_gpu_find_resource(g, mblob.resource_id);
+ CHECK(res, cmd);
+
+ result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
+ CHECK_RESULT(result, cmd);
+
+ for (slot = 0; slot < MAX_SLOTS; slot++) {
+ if (memory_regions[slot].used) {
+ continue;
+ }
+
+ MemoryRegion *mr = &(memory_regions[slot].mr);
+ memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
+ (void *)mapping.ptr);
+ memory_region_add_subregion(&g->parent_obj.hostmem,
+ mblob.offset, mr);
+ memory_regions[slot].resource_id = mblob.resource_id;
+ memory_regions[slot].used = 1;
+ break;
+ }
+
+ CHECK((slot < MAX_SLOTS), cmd);
+
+ memset(&resp, 0, sizeof(resp));
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+ result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
+ &resp.map_info);
+
+ CHECK_RESULT(result, cmd);
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ uint32_t slot = 0;
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_unmap_blob ublob;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(ublob);
+
+ CHECK(ublob.resource_id != 0, cmd);
+
+ res = virtio_gpu_find_resource(g, ublob.resource_id);
+ CHECK(res, cmd);
+
+ for (slot = 0; slot < MAX_SLOTS; slot++) {
+ if (memory_regions[slot].resource_id != ublob.resource_id) {
+ continue;
+ }
+
+ MemoryRegion *mr = &(memory_regions[slot].mr);
+ memory_region_del_subregion(&g->parent_obj.hostmem, mr);
+
+ memory_regions[slot].resource_id = 0;
+ memory_regions[slot].used = 0;
+ break;
+ }
+
+ CHECK((slot < MAX_SLOTS), cmd);
+ result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct rutabaga_fence fence = { 0 };
+ int32_t result;
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
+
+ switch (cmd->cmd_hdr.type) {
+ case VIRTIO_GPU_CMD_CTX_CREATE:
+ rutabaga_cmd_context_create(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_DESTROY:
+ rutabaga_cmd_context_destroy(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
+ rutabaga_cmd_create_resource_2d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
+ rutabaga_cmd_create_resource_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_SUBMIT_3D:
+ rutabaga_cmd_submit_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
+ rutabaga_cmd_transfer_to_host_2d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
+ rutabaga_cmd_transfer_to_host_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
+ rutabaga_cmd_transfer_from_host_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
+ rutabaga_cmd_attach_backing(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
+ rutabaga_cmd_detach_backing(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_SET_SCANOUT:
+ rutabaga_cmd_set_scanout(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
+ rutabaga_cmd_resource_flush(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNREF:
+ rutabaga_cmd_resource_unref(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
+ rutabaga_cmd_ctx_attach_resource(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
+ rutabaga_cmd_ctx_detach_resource(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
+ rutabaga_cmd_get_capset_info(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_CAPSET:
+ rutabaga_cmd_get_capset(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
+ virtio_gpu_get_display_info(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_EDID:
+ virtio_gpu_get_edid(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+ rutabaga_cmd_resource_create_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+ rutabaga_cmd_resource_map_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+ rutabaga_cmd_resource_unmap_blob(g, cmd);
+ break;
+ default:
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ break;
+ }
+
+ if (cmd->finished) {
+ return;
+ }
+ if (cmd->error) {
+ error_report("%s: ctrl 0x%x, error 0x%x", __func__,
+ cmd->cmd_hdr.type, cmd->error);
+ virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
+ return;
+ }
+ if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+ return;
+ }
+
+ fence.flags = cmd->cmd_hdr.flags;
+ fence.ctx_id = cmd->cmd_hdr.ctx_id;
+ fence.fence_id = cmd->cmd_hdr.fence_id;
+ fence.ring_idx = cmd->cmd_hdr.ring_idx;
+
+ trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
+
+ result = rutabaga_create_fence(vr->rutabaga, &fence);
+ CHECK_RESULT(result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_aio_cb(void *opaque)
+{
+ struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
+ VirtIOGPU *g = (VirtIOGPU *)data->vr;
+ struct rutabaga_fence fence_data = data->fence;
+ struct virtio_gpu_ctrl_command *cmd, *tmp;
+
+ uint32_t signaled_ctx_specific = fence_data.flags &
+ RUTABAGA_FLAG_INFO_RING_IDX;
+
+ QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+ /*
+ * Due to context specific timelines.
+ */
+ uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
+ RUTABAGA_FLAG_INFO_RING_IDX;
+
+ if (signaled_ctx_specific != target_ctx_specific) {
+ continue;
+ }
+
+ if (signaled_ctx_specific &&
+ (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
+ continue;
+ }
+
+ if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
+ continue;
+ }
+
+ trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+ QTAILQ_REMOVE(&g->fenceq, cmd, next);
+ g_free(cmd);
+ }
+
+ g_free(data);
+}
+
+static void
+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
+ struct rutabaga_fence fence_data) {
+ struct rutabaga_aio_data *data;
+ VirtIOGPU *g = (VirtIOGPU *)user_data;
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ /*
+ * both gfxstream and cross-domain (and even newer versions of virglrenderer:
+ * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence completion on
+ * threads ("callback threads") that are different from the thread that
+ * processes the command queue ("main thread").
+ *
+ * crosvm and other virtio-gpu 1.1 implementations enable callback threads
+ * via locking. However, on QEMU a deadlock is observed if
+ * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
+ * from a thread that is not the main thread.
+ *
+ * The reason is QEMU's internal locking is designed to work with QEMU
+ * threads (see rcu_register_thread()) and not generic C/C++/Rust threads.
+ * For now, we can workaround this by scheduling the return of the
+ * fence descriptors on the main thread.
+ */
+
+ data = g_new0(struct rutabaga_aio_data, 1);
+ data->vr = vr;
+ data->fence = fence_data;
+ aio_bh_schedule_oneshot_full(vr->ctx, virtio_gpu_rutabaga_aio_cb,
+ (void *)data, "aio");
+}
+
+static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
+{
+ int result;
+ uint64_t capset_mask;
+ struct rutabaga_channels channels = { 0 };
+ struct rutabaga_builder builder = { 0 };
+
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ vr->rutabaga = NULL;
+
+ if (!vr->capset_names) {
+ return -EINVAL;
+ }
+
+ builder.wsi = RUTABAGA_WSI_SURFACELESS;
+ /*
+ * Currently, if WSI is specified, the only valid strings are "surfaceless"
+ * or "headless". Surfaceless doesn't create a native window surface, but
+ * does copy from the render target to the Pixman buffer if a virtio-gpu
+ * 2D hypercall is issued. Surfaceless is the default.
+ *
+ * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
+ * use case is automated testing environments where there is no need to view
+ * results.
+ *
+ * In the future, more performant virtio-gpu 2D UI integration may be added.
+ */
+ if (vr->wsi) {
+ if (!strcmp(vr->wsi, "surfaceless")) {
+ vr->headless = false;
+ } else if (!strcmp(vr->wsi, "headless")) {
+ vr->headless = true;
+ } else {
+ return -EINVAL;
+ }
+ }
+
+ result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);
+ if (result) {
+ return result;
+ }
+
+ /*
+ * rutabaga-0.1.1 is only compiled/tested with gfxstream and cross-domain
+ * support. Future versions may change this to have more context types if
+ * there is any interest.
+ */
+ if (capset_mask & (BIT(RUTABAGA_CAPSET_VIRGL) |
+ BIT(RUTABAGA_CAPSET_VIRGL2) |
+ BIT(RUTABAGA_CAPSET_VENUS) |
+ BIT(RUTABAGA_CAPSET_DRM))) {
+ return -EINVAL;
+ }
+
+ builder.user_data = (uint64_t)(uintptr_t)g;
+ builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
+ builder.capset_mask = capset_mask;
+
+ if (vr->wayland_socket_path) {
+ if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) == 0) {
+ return -EINVAL;
+ }
+
+ channels.channels =
+ (struct rutabaga_channel *)calloc(1, sizeof(struct rutabaga_channel));
+ channels.num_channels = 1;
+ channels.channels[0].channel_name = vr->wayland_socket_path;
+ channels.channels[0].channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
+ builder.channels = &channels;
+ }
+
+ result = rutabaga_init(&builder, &vr->rutabaga);
+ if (builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) {
+ free(channels.channels);
+ }
+
+ memset(&memory_regions, 0, MAX_SLOTS * sizeof(struct MemoryRegionInfo));
+ vr->ctx = qemu_get_aio_context();
+ return result;
+}
+
+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
+{
+ int result;
+ uint32_t num_capsets;
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ if (!vr->rutabaga_active) {
+ result = virtio_gpu_rutabaga_init(g);
+ if (result) {
+ error_report("Failed to init rutabaga");
+ return 0;
+ }
+
+ vr->rutabaga_active = true;
+ }
+
+ result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
+ if (result) {
+ error_report("Failed to get capsets");
+ return 0;
+ }
+ vr->num_capsets = num_capsets;
+ return num_capsets;
+}
+
+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOGPU *g = VIRTIO_GPU(vdev);
+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ struct virtio_gpu_ctrl_command *cmd;
+
+ if (!virtio_queue_ready(vq)) {
+ return;
+ }
+
+ if (!vr->rutabaga_active) {
+ int result = virtio_gpu_rutabaga_init(g);
+ if (!result) {
+ vr->rutabaga_active = true;
+ }
+ }
+
+ if (!vr->rutabaga_active) {
+ return;
+ }
+
+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+ while (cmd) {
+ cmd->vq = vq;
+ cmd->error = 0;
+ cmd->finished = false;
+ QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+ }
+
+ virtio_gpu_process_cmdq(g);
+}
+
+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
+{
+ int num_capsets;
+ VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
+ VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
+
+ num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
+ if (!num_capsets) {
+ error_setg(errp, "rutabaga initialization failed or no capsets found");
+ return;
+ }
+
+#if HOST_BIG_ENDIAN
+ error_setg(errp, "rutabaga is not supported on bigendian platforms");
+ return;
+#endif
+
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
+
+ bdev->virtio_config.num_capsets = num_capsets;
+ virtio_gpu_device_realize(qdev, errp);
+}
+
+static Property virtio_gpu_rutabaga_properties[] = {
+ DEFINE_PROP_STRING("capset_names", VirtioGpuRutabaga, capset_names),
+ DEFINE_PROP_STRING("wayland_socket_path", VirtioGpuRutabaga,
+ wayland_socket_path),
+ DEFINE_PROP_STRING("wsi", VirtioGpuRutabaga, wsi),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+ VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
+ VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
+
+ vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
+ vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
+ vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
+ vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
+
+ vdc->realize = virtio_gpu_rutabaga_realize;
+ device_class_set_props(dc, virtio_gpu_rutabaga_properties);
+}
+
+static const TypeInfo virtio_gpu_rutabaga_info = {
+ .name = TYPE_VIRTIO_GPU_RUTABAGA,
+ .parent = TYPE_VIRTIO_GPU,
+ .instance_size = sizeof(VirtioGpuRutabaga),
+ .class_init = virtio_gpu_rutabaga_class_init,
+};
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
+module_kconfig(VIRTIO_GPU);
+
+static void virtio_register_types(void)
+{
+ type_register_static(&virtio_gpu_rutabaga_info);
+}
+
+type_init(virtio_register_types)
+
+module_dep("hw-display-virtio-gpu");
diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
new file mode 100644
index 0000000000..01831bd03f
--- /dev/null
+++ b/hw/display/virtio-vga-rutabaga.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "qemu/osdep.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/display/vga.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "virtio-vga.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
+
+typedef struct VirtIOVGARUTABAGA VirtIOVGARUTABAGA;
+DECLARE_INSTANCE_CHECKER(VirtIOVGARUTABAGA, VIRTIO_VGA_RUTABAGA,
+ TYPE_VIRTIO_VGA_RUTABAGA)
+
+struct VirtIOVGARUTABAGA {
+ VirtIOVGABase parent_obj;
+
+ VirtioGpuRutabaga vdev;
+};
+
+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
+{
+ VirtIOVGARUTABAGA *dev = VIRTIO_VGA_RUTABAGA(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VIRTIO_GPU_RUTABAGA);
+ VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
+ .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
+ .parent = TYPE_VIRTIO_VGA_BASE,
+ .instance_size = sizeof(VirtIOVGARUTABAGA),
+ .instance_init = virtio_vga_rutabaga_inst_initfn,
+};
+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
+module_kconfig(VIRTIO_VGA);
+
+static void virtio_vga_register_types(void)
+{
+ if (have_vga) {
+ virtio_pci_types_register(&virtio_vga_rutabaga_info);
+ }
+}
+
+type_init(virtio_vga_register_types)
+
+module_dep("hw-display-virtio-vga");
--
2.41.0.255.g8b1d071c50-goog
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v1 7/9] gfxstream + rutabaga: meson support
2023-07-11 2:56 [PATCH v1 0/9] gfxstream + rutabaga_gfx Gurchetan Singh
` (5 preceding siblings ...)
2023-07-11 2:56 ` [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
@ 2023-07-11 2:56 ` Gurchetan Singh
2023-07-11 2:56 ` [PATCH v1 8/9] gfxstream + rutabaga: enable rutabaga Gurchetan Singh
` (2 subsequent siblings)
9 siblings, 0 replies; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-11 2:56 UTC (permalink / raw)
To: qemu-devel
Cc: --cc=kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
- Add meson detection of rutabaga_gfx
- Build virtio-gpu-rutabaga.c + associated vga/pci files when
present
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
hw/display/meson.build | 22 ++++++++++++++++++++++
meson.build | 7 +++++++
meson_options.txt | 2 ++
scripts/meson-buildoptions.sh | 3 +++
4 files changed, 34 insertions(+)
diff --git a/hw/display/meson.build b/hw/display/meson.build
index 413ba4ab24..10e41e4eef 100644
--- a/hw/display/meson.build
+++ b/hw/display/meson.build
@@ -79,6 +79,13 @@ if config_all_devices.has_key('CONFIG_VIRTIO_GPU')
if_true: [files('virtio-gpu-gl.c', 'virtio-gpu-virgl.c'), pixman, virgl])
hw_display_modules += {'virtio-gpu-gl': virtio_gpu_gl_ss}
endif
+
+ if rutabaga.found()
+ virtio_gpu_rutabaga_ss = ss.source_set()
+ virtio_gpu_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', rutabaga],
+ if_true: [files('virtio-gpu-rutabaga.c'), pixman])
+ hw_display_modules += {'virtio-gpu-rutabaga': virtio_gpu_rutabaga_ss}
+ endif
endif
if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
@@ -95,6 +102,12 @@ if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
if_true: [files('virtio-gpu-pci-gl.c'), pixman])
hw_display_modules += {'virtio-gpu-pci-gl': virtio_gpu_pci_gl_ss}
endif
+ if rutabaga.found()
+ virtio_gpu_pci_rutabaga_ss = ss.source_set()
+ virtio_gpu_pci_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', 'CONFIG_VIRTIO_PCI', rutabaga],
+ if_true: [files('virtio-gpu-pci-rutabaga.c'), pixman])
+ hw_display_modules += {'virtio-gpu-pci-rutabaga': virtio_gpu_pci_rutabaga_ss}
+ endif
endif
if config_all_devices.has_key('CONFIG_VIRTIO_VGA')
@@ -113,6 +126,15 @@ if config_all_devices.has_key('CONFIG_VIRTIO_VGA')
virtio_vga_gl_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
if_false: files('acpi-vga-stub.c'))
hw_display_modules += {'virtio-vga-gl': virtio_vga_gl_ss}
+
+ if rutabaga.found()
+ virtio_vga_rutabaga_ss = ss.source_set()
+ virtio_vga_rutabaga_ss.add(when: ['CONFIG_VIRTIO_VGA', rutabaga],
+ if_true: [files('virtio-vga-rutabaga.c'), pixman])
+ virtio_vga_rutabaga_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
+ if_false: files('acpi-vga-stub.c'))
+ hw_display_modules += {'virtio-vga-rutabaga': virtio_vga_rutabaga_ss}
+ endif
endif
system_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_lcdc.c'))
diff --git a/meson.build b/meson.build
index 5fcdb37a71..c7b4811220 100644
--- a/meson.build
+++ b/meson.build
@@ -1069,6 +1069,12 @@ if not get_option('virglrenderer').auto() or have_system or have_vhost_user_gpu
dependencies: virgl))
endif
endif
+rutabaga = not_found
+if not get_option('rutabaga_gfx').auto() or have_system or have_vhost_user_gpu
+ rutabaga = dependency('rutabaga_gfx_ffi',
+ method: 'pkg-config',
+ required: get_option('rutabaga_gfx'))
+endif
blkio = not_found
if not get_option('blkio').auto() or have_block
blkio = dependency('blkio',
@@ -4272,6 +4278,7 @@ summary_info += {'libtasn1': tasn1}
summary_info += {'PAM': pam}
summary_info += {'iconv support': iconv}
summary_info += {'virgl support': virgl}
+summary_info += {'rutabaga support': rutabaga}
summary_info += {'blkio support': blkio}
summary_info += {'curl support': curl}
summary_info += {'Multipath support': mpathpersist}
diff --git a/meson_options.txt b/meson_options.txt
index bbb5c7e886..d7998655a8 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -224,6 +224,8 @@ option('vmnet', type : 'feature', value : 'auto',
description: 'vmnet.framework network backend support')
option('virglrenderer', type : 'feature', value : 'auto',
description: 'virgl rendering support')
+option('rutabaga_gfx', type : 'feature', value : 'auto',
+ description: 'rutabaga_gfx support')
option('png', type : 'feature', value : 'auto',
description: 'PNG support with libpng')
option('vnc', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 7dd5709ef4..92af437271 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -154,6 +154,7 @@ meson_options_help() {
printf "%s\n" ' rbd Ceph block device driver'
printf "%s\n" ' rdma Enable RDMA-based migration'
printf "%s\n" ' replication replication support'
+ printf "%s\n" ' rutabaga-gfx rutabaga_gfx support'
printf "%s\n" ' sdl SDL user interface'
printf "%s\n" ' sdl-image SDL Image support for icons'
printf "%s\n" ' seccomp seccomp support'
@@ -419,6 +420,8 @@ _meson_option_parse() {
--disable-replication) printf "%s" -Dreplication=disabled ;;
--enable-rng-none) printf "%s" -Drng_none=true ;;
--disable-rng-none) printf "%s" -Drng_none=false ;;
+ --enable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=enabled ;;
+ --disable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=disabled ;;
--enable-safe-stack) printf "%s" -Dsafe_stack=true ;;
--disable-safe-stack) printf "%s" -Dsafe_stack=false ;;
--enable-sanitizers) printf "%s" -Dsanitizers=true ;;
--
2.41.0.255.g8b1d071c50-goog
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v1 8/9] gfxstream + rutabaga: enable rutabaga
2023-07-11 2:56 [PATCH v1 0/9] gfxstream + rutabaga_gfx Gurchetan Singh
` (6 preceding siblings ...)
2023-07-11 2:56 ` [PATCH v1 7/9] gfxstream + rutabaga: meson support Gurchetan Singh
@ 2023-07-11 2:56 ` Gurchetan Singh
2023-07-11 2:56 ` [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
2023-07-24 9:56 ` [PATCH v1 0/9] gfxstream + rutabaga_gfx Alyssa Ross
9 siblings, 0 replies; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-11 2:56 UTC (permalink / raw)
To: qemu-devel
Cc: --cc=kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
This change enables rutabaga to receive virtio-gpu-3d hypercalls
when it is active.
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
hw/display/virtio-gpu-base.c | 3 ++-
hw/display/virtio-gpu.c | 5 +++--
softmmu/qdev-monitor.c | 3 +++
softmmu/vl.c | 1 +
4 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 6c5f1f327f..7913d9b82e 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -206,7 +206,8 @@ virtio_gpu_base_get_features(VirtIODevice *vdev, uint64_t features,
{
VirtIOGPUBase *g = VIRTIO_GPU_BASE(vdev);
- if (virtio_gpu_virgl_enabled(g->conf)) {
+ if (virtio_gpu_virgl_enabled(g->conf) ||
+ virtio_gpu_rutabaga_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_VIRGL);
}
if (virtio_gpu_edid_enabled(g->conf)) {
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 32da46fefc..068f4aeb7e 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1374,8 +1374,9 @@ void virtio_gpu_device_realize(DeviceState *qdev, Error **errp)
VirtIOGPU *g = VIRTIO_GPU(qdev);
if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
- if (!virtio_gpu_have_udmabuf()) {
- error_setg(errp, "cannot enable blob resources without udmabuf");
+ if (!virtio_gpu_have_udmabuf() &&
+ !virtio_gpu_rutabaga_enabled(g->parent_obj.conf)) {
+ error_setg(errp, "need udmabuf or rutabaga for blob resources");
return;
}
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
index 74f4e41338..246f0b051d 100644
--- a/softmmu/qdev-monitor.c
+++ b/softmmu/qdev-monitor.c
@@ -86,6 +86,9 @@ static const QDevAlias qdev_alias_table[] = {
{ "virtio-gpu-pci", "virtio-gpu", QEMU_ARCH_VIRTIO_PCI },
{ "virtio-gpu-gl-device", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_MMIO },
{ "virtio-gpu-gl-pci", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_PCI },
+ { "virtio-gpu-rutabaga-device", "virtio-gpu-rutabaga",
+ QEMU_ARCH_VIRTIO_MMIO },
+ { "virtio-gpu-rutabaga-pci", "virtio-gpu-rutabaga", QEMU_ARCH_VIRTIO_PCI },
{ "virtio-input-host-device", "virtio-input-host", QEMU_ARCH_VIRTIO_MMIO },
{ "virtio-input-host-ccw", "virtio-input-host", QEMU_ARCH_VIRTIO_CCW },
{ "virtio-input-host-pci", "virtio-input-host", QEMU_ARCH_VIRTIO_PCI },
diff --git a/softmmu/vl.c b/softmmu/vl.c
index b0b96f67fa..2f98eefdf3 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -216,6 +216,7 @@ static struct {
{ .driver = "ati-vga", .flag = &default_vga },
{ .driver = "vhost-user-vga", .flag = &default_vga },
{ .driver = "virtio-vga-gl", .flag = &default_vga },
+ { .driver = "virtio-vga-rutabaga", .flag = &default_vga },
};
static QemuOptsList qemu_rtc_opts = {
--
2.41.0.255.g8b1d071c50-goog
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation
2023-07-11 2:56 [PATCH v1 0/9] gfxstream + rutabaga_gfx Gurchetan Singh
` (7 preceding siblings ...)
2023-07-11 2:56 ` [PATCH v1 8/9] gfxstream + rutabaga: enable rutabaga Gurchetan Singh
@ 2023-07-11 2:56 ` Gurchetan Singh
2023-07-12 21:40 ` Akihiko Odaki
2023-07-24 9:56 ` [PATCH v1 0/9] gfxstream + rutabaga_gfx Alyssa Ross
9 siblings, 1 reply; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-11 2:56 UTC (permalink / raw)
To: qemu-devel
Cc: --cc=kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
This adds basic documentation for virtio-gpu.
Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
---
docs/system/device-emulation.rst | 1 +
docs/system/devices/virtio-gpu.rst | 80 ++++++++++++++++++++++++++++++
2 files changed, 81 insertions(+)
create mode 100644 docs/system/devices/virtio-gpu.rst
diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 4491c4cbf7..1167f3a9f2 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -91,6 +91,7 @@ Emulated Devices
devices/nvme.rst
devices/usb.rst
devices/vhost-user.rst
+ devices/virtio-gpu.rst
devices/virtio-pmem.rst
devices/vhost-user-rng.rst
devices/canokey.rst
diff --git a/docs/system/devices/virtio-gpu.rst b/docs/system/devices/virtio-gpu.rst
new file mode 100644
index 0000000000..2426039540
--- /dev/null
+++ b/docs/system/devices/virtio-gpu.rst
@@ -0,0 +1,80 @@
+..
+ SPDX-License-Identifier: GPL-2.0
+
+virtio-gpu
+==========
+
+This document explains the setup and usage of the virtio-gpu device.
+The virtio-gpu device paravirtualizes the GPU and display controller.
+
+Linux kernel support
+--------------------
+
+virtio-gpu requires a guest Linux kernel built with the
+``CONFIG_DRM_VIRTIO_GPU`` option.
+
+QEMU virtio-gpu variants
+------------------------
+
+There are many virtio-gpu device variants, listed below:
+
+ * ``virtio-vga``
+ * ``virtio-gpu-pci``
+ * ``virtio-vga-gl``
+ * ``virtio-gpu-gl-pci``
+ * ``virtio-vga-rutabaga``
+ * ``virtio-gpu-rutabaga-pci``
+ * ``vhost-user-vga``
+ * ``vhost-user-gl-pci``
+
+QEMU provides a 2D virtio-gpu backend, and two accelerated backends:
+virglrenderer ('gl' device label) and rutabaga_gfx ('rutabaga' device
+label). There is also a vhost-user backend that runs the 2D device
+in a separate process. Each device type has a VGA or PCI variant. This
+document uses the PCI variant in examples.
+
+virtio-gpu 2d
+-------------
+
+The default 2D mode uses a guest software renderer (llvmpipe, lavapipe,
+SwiftShader) to provide the OpenGL/Vulkan implementations.
+
+.. parsed-literal::
+ -device virtio-gpu-pci
+
+virtio-gpu virglrenderer
+------------------------
+
+When using virgl accelerated graphics mode, OpenGL API calls are translated
+into an intermediate representation (see `Gallium3D`_). The intermediate
+representation is communicated to the host and the `virglrenderer`_ library
+on the host translates the intermediate representation back to OpenGL API
+calls.
+
+.. parsed-literal::
+ -device virtio-gpu-gl-pci
+
+.. _Gallium3D: https://www.freedesktop.org/wiki/Software/gallium/
+.. _virglrenderer: https://gitlab.freedesktop.org/virgl/virglrenderer/
+
+virtio-gpu rutabaga
+-------------------
+
+virtio-gpu can also leverage `rutabaga_gfx`_ to provide `gfxstream`_ rendering
+and `Wayland display passthrough`_. With the gfxstream rendering mode, GLES
+and Vulkan calls are forwarded directly to the host with minimal modification.
+
+Please refer to the `crosvm book`_ for how to set up the guest for Wayland
+passthrough (QEMU uses the same implementation).
+
+This device does require host blob support (``hostmem`` field below), but not
+all capsets (``capset_names`` below) have to be enabled when starting the device.
+
+.. parsed-literal::
+ -device virtio-gpu-rutabaga-pci,capset_names=gfxstream-vulkan:cross-domain,\\
+ hostmem=8G,wayland_socket_path="$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY"
+
+.. _rutabaga_gfx: https://github.com/google/crosvm/blob/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h
+.. _gfxstream: https://android.googlesource.com/platform/hardware/google/gfxstream/
+.. _Wayland display passthrough: https://www.youtube.com/watch?v=OZJiHMtIQ2M
+.. _crosvm book: https://crosvm.dev/book/devices/wayland.html
--
2.41.0.255.g8b1d071c50-goog
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH v1 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options
2023-07-11 2:56 ` [PATCH v1 5/9] gfxstream + rutabaga prep: added need defintions, fields, and options Gurchetan Singh
@ 2023-07-12 11:36 ` Akihiko Odaki
0 siblings, 0 replies; 22+ messages in thread
From: Akihiko Odaki @ 2023-07-12 11:36 UTC (permalink / raw)
To: Gurchetan Singh, qemu-devel
Cc: --cc=kraxel, marcandre.lureau, dmitry.osipenko, ray.huang,
alex.bennee, shentey
On 2023/07/11 11:56, Gurchetan Singh wrote:
> This modifies the common virtio-gpu.h file to have the fields and
> definitions needed by gfxstream/rutabaga, in VirtioGpuRutabaga.
s/VirtioGpuRutabaga/VirtIOGPURutabaga/g since VirtIOGPU is spelled this
way everywhere else.
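A sketch of what the rename looks like at the declaration site (the
remaining occurrences follow mechanically; only the spelling changes):

    #define TYPE_VIRTIO_GPU_RUTABAGA "virtio-gpu-rutabaga-device"
    OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPURutabaga, VIRTIO_GPU_RUTABAGA)

    struct VirtIOGPURutabaga {
        struct VirtIOGPU parent_obj;
        /* fields unchanged from the patch */
    };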
>
> - a colon separated list of capset names, defined in the virtio spec
> - a wayland socket path to enable guest Wayland passthrough
>
> The command to run these would be:
>
> -device virtio-vga-rutabaga,capset_names=gfxstream:cross-domain, \
> wayland_socket_path=/run/user/1000/wayland-0,hostmem=8G \
>
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> ---
> v2: void *rutabaga --> struct rutabaga *rutabaga (Akihiko)
> have a separate rutabaga device instead of using the GL device (Bernhard)
>
> include/hw/virtio/virtio-gpu.h | 21 +++++++++++++++++++++
> 1 file changed, 21 insertions(+)
>
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index 5927ca1864..5a1b15ccb9 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -38,6 +38,9 @@ OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPUGL, VIRTIO_GPU_GL)
> #define TYPE_VHOST_USER_GPU "vhost-user-gpu"
> OBJECT_DECLARE_SIMPLE_TYPE(VhostUserGPU, VHOST_USER_GPU)
>
> +#define TYPE_VIRTIO_GPU_RUTABAGA "virtio-gpu-rutabaga-device"
> +OBJECT_DECLARE_SIMPLE_TYPE(VirtioGpuRutabaga, VIRTIO_GPU_RUTABAGA)
> +
> struct virtio_gpu_simple_resource {
> uint32_t resource_id;
> uint32_t width;
> @@ -94,6 +97,7 @@ enum virtio_gpu_base_conf_flags {
> VIRTIO_GPU_FLAG_DMABUF_ENABLED,
> VIRTIO_GPU_FLAG_BLOB_ENABLED,
> VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
> + VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
> };
>
> #define virtio_gpu_virgl_enabled(_cfg) \
> @@ -108,6 +112,8 @@ enum virtio_gpu_base_conf_flags {
> (_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
> #define virtio_gpu_context_init_enabled(_cfg) \
> (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
> +#define virtio_gpu_rutabaga_enabled(_cfg) \
> + (_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
> #define virtio_gpu_hostmem_enabled(_cfg) \
> (_cfg.hostmem > 0)
>
> @@ -229,6 +235,21 @@ struct VhostUserGPU {
> bool backend_blocked;
> };
>
> +struct rutabaga;
> +
> +struct VirtioGpuRutabaga {
> + struct VirtIOGPU parent_obj;
> +
> + bool rutabaga_active;
> + char *capset_names;
> + char *wayland_socket_path;
> + char *wsi;
> + bool headless;
> + uint32_t num_capsets;
> + struct rutabaga *rutabaga;
> + AioContext *ctx;
> +};
> +
> #define VIRTIO_GPU_FILL_CMD(out) do { \
> size_t s; \
> s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0, \
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream
2023-07-11 2:56 ` [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
@ 2023-07-12 12:31 ` Akihiko Odaki
2023-07-12 19:14 ` Marc-André Lureau
2023-07-15 19:58 ` Bernhard Beschow
2 siblings, 0 replies; 22+ messages in thread
From: Akihiko Odaki @ 2023-07-12 12:31 UTC (permalink / raw)
To: Gurchetan Singh, qemu-devel
Cc: kraxel, marcandre.lureau, dmitry.osipenko, ray.huang, alex.bennee,
shentey
On 2023/07/11 11:56, Gurchetan Singh wrote:
> This adds initial support for gfxstream and cross-domain. Both
> features rely on virtio-gpu blob resources and context types, which
> are also implemented in this patch.
>
> gfxstream has a long and illustrious history in Android graphics
> paravirtualization. It has been powering graphics in the Android
> Studio Emulator for more than a decade, which is the main developer
> platform.
>
> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
> The key design characteristic was a 1:1 threading model and
> auto-generation, which fit nicely with the OpenGLES spec. It also
> allowed easy layering with ANGLE on the host, which provides the GLES
> implementations on Windows or MacOS environments.
>
> gfxstream has traditionally been maintained by a single engineer, and
> between 2015 to 2021, the goldfish throne passed to Frank Yang.
> Historians often remark this glorious reign ("pax gfxstreama" is the
> academic term) was comparable to that of Augustus and the both Queen
> Elizabeths. Just to name a few accomplishments in a resplendent
> panoply: higher versions of GLES, address space graphics, snapshot
> support and CTS compliant Vulkan [b].
>
> One major drawback was the use of out-of-tree goldfish drivers.
> Android engineers didn't know much about DRM/KMS and especially TTM so
> a simple guest to host pipe was conceived.
>
> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> the Mesa/virglrenderer communities. In 2018, the initial virtio-gpu
> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> It was a symbol compatible replacement of virglrenderer [c] and named
> "AVDVirglrenderer". This implementation forms the basis of the
> current gfxstream host implementation still in use today.
>
> cross-domain support follows a similar arc. Originally conceived by
> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> 2018, it initially relied on the downstream "virtio-wl" device.
>
> In 2020 and 2021, virtio-gpu was extended to include blob resources
> and multiple timelines by yours truly, features gfxstream/cross-domain
> both require to function correctly.
>
> Right now, we stand at the precipice of a truly fantastic possibility:
> the Android Emulator powered by upstream QEMU and upstream Linux
> kernel. gfxstream will then be packaged properly, and app
> developers can even fix gfxstream bugs on their own if they encounter
> them.
>
> It's been quite the ride, my friends. Where will gfxstream head next,
> nobody really knows. I wouldn't be surprised if it's around for
> another decade, maintained by a new generation of Android graphics
> enthusiasts.
>
> Technical details:
> - Very simple initial display integration: just used Pixman
> - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
> calls
>
> [a] https://android-review.googlesource.com/c/platform/development/+/34470
> [b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> [c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> ---
> v2: Incorporated various suggestions by Akihiko Odaki and Bernhard Beschow
> - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> - Used error_report(..)
> - Used g_autofree to fix leaks on error paths
> - Removed unnecessary casts
> - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>
> hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
> hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
> hw/display/virtio-vga-rutabaga.c | 52 ++
> 3 files changed, 1188 insertions(+)
> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> create mode 100644 hw/display/virtio-gpu-rutabaga.c
> create mode 100644 hw/display/virtio-vga-rutabaga.c
>
> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
> new file mode 100644
> index 0000000000..5765bef266
> --- /dev/null
> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "qemu/module.h"
> +#include "hw/pci/pci.h"
> +#include "hw/qdev-properties.h"
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/virtio-bus.h"
> +#include "hw/virtio/virtio-gpu-pci.h"
> +#include "qom/object.h"
> +
> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
> +typedef struct VirtIOGPURUTABAGAPCI VirtIOGPURUTABAGAPCI;
s/VirtIOGPURUTABAGAPCI/VirtIOGPURutabagaPCI/g
> +DECLARE_INSTANCE_CHECKER(VirtIOGPURUTABAGAPCI, VIRTIO_GPU_RUTABAGA_PCI,
> + TYPE_VIRTIO_GPU_RUTABAGA_PCI)
> +
> +struct VirtIOGPURUTABAGAPCI {
> + VirtIOGPUPCIBase parent_obj;
> + VirtioGpuRutabaga vdev;
> +};
> +
> +static void virtio_gpu_rutabaga_initfn(Object *obj)
> +{
> + VirtIOGPURUTABAGAPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
> +
> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> + TYPE_VIRTIO_GPU_RUTABAGA);
> + VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> +}
> +
> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
> + .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
> + .parent = TYPE_VIRTIO_GPU_PCI_BASE,
> + .instance_size = sizeof(VirtIOGPURUTABAGAPCI),
> + .instance_init = virtio_gpu_rutabaga_initfn,
> +};
> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
> +module_kconfig(VIRTIO_PCI);
> +
> +static void virtio_gpu_rutabaga_pci_register_types(void)
> +{
> + virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
> +}
> +
> +type_init(virtio_gpu_rutabaga_pci_register_types)
> +
> +module_dep("hw-display-virtio-gpu-pci");
> diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
> new file mode 100644
> index 0000000000..b60a30a093
> --- /dev/null
> +++ b/hw/display/virtio-gpu-rutabaga.c
> @@ -0,0 +1,1088 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "qemu/error-report.h"
> +#include "qemu/iov.h"
> +#include "trace.h"
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/virtio-gpu.h"
> +#include "hw/virtio/virtio-gpu-pixman.h"
> +#include "hw/virtio/virtio-iommu.h"
> +
> +#include <glib/gmem.h>
> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
> +
> +#define CHECK(condition, cmd) \
> + do { \
> + if (!condition) { \
Please wrap the parameter with parentheses: if (!(condition))
It will break CHECK_RESULT() without them. Add parentheses to cmd too
just for safety.
> + error_report("CHECK failed in %s() %s:" "%d", __func__, \
> + __FILE__, __LINE__); \
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
> + return; \
> + } \
> + } while (0)
> +
> +#define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
CHECK() and CHECK_RESULT() are somewhat confusing though I do understand
the intention.
Instead of defining a dedicated macro, I think you can just write
CHECK(!result, cmd) everywhere.
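For reference, a sketch of the macro with both points applied
(parenthesized arguments, CHECK_RESULT() dropped); untested, but the
expansion is now safe for negated expressions:

    #define CHECK(condition, cmd)                                        \
        do {                                                             \
            if (!(condition)) {                                          \
                error_report("CHECK failed in %s() %s:%d", __func__,     \
                             __FILE__, __LINE__);                        \
                (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC;               \
                return;                                                  \
            }                                                            \
        } while (0)

    /* former CHECK_RESULT(result, cmd) call sites become: */
    CHECK(!result, cmd);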
> +
> +#define MAX_SLOTS 4096
> +
> +struct MemoryRegionInfo {
> + int used;
> + MemoryRegion mr;
> + uint32_t resource_id;
> +};
> +
> +static struct MemoryRegionInfo memory_regions[MAX_SLOTS];
I don't think it's OK to use a static variable, considering that there
can be multiple instances of this device.
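One possible shape for the fix (a sketch only; placing the slot table in
the instance struct is my assumption, not something this patch does):

    struct VirtioGpuRutabaga {
        struct VirtIOGPU parent_obj;
        /* ...existing fields... */
        struct MemoryRegionInfo memory_regions[MAX_SLOTS];
    };

    /* map/unmap would then index through the instance, e.g.: */
    MemoryRegion *mr = &vr->memory_regions[slot].mr;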
> +
> +struct rutabaga_aio_data {
> + struct VirtioGpuRutabaga *vr;
> + struct rutabaga_fence fence;
> +};
> +
> +static void
> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
> + uint32_t resource_id)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_transfer transfer = { 0 };
> + struct iovec transfer_iovec;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + res = virtio_gpu_find_resource(g, resource_id);
> + if (!res) {
> + return;
> + }
> +
> + if (res->width != s->current_cursor->width ||
> + res->height != s->current_cursor->height) {
> + return;
> + }
> +
> + transfer.x = 0;
> + transfer.y = 0;
> + transfer.z = 0;
> + transfer.w = res->width;
> + transfer.h = res->height;
> + transfer.d = 1;
> +
> + transfer_iovec.iov_base = (void *)s->current_cursor->data;
> + transfer_iovec.iov_len = res->width * res->height * 4;
> +
> + rutabaga_resource_transfer_read(vr->rutabaga, 0,
> + resource_id, &transfer,
> + &transfer_iovec);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
> +{
> + VirtIOGPU *g = VIRTIO_GPU(b);
> + virtio_gpu_process_cmdq(g);
> +}
> +
> +static void
> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_create_3d rc_3d = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_create_2d c2d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(c2d);
> + trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> + c2d.width, c2d.height);
> +
> + rc_3d.target = 2;
> + rc_3d.format = c2d.format;
> + rc_3d.bind = (1 << 1);
> + rc_3d.width = c2d.width;
> + rc_3d.height = c2d.height;
> + rc_3d.depth = 1;
> + rc_3d.array_size = 1;
> + rc_3d.last_level = 0;
> + rc_3d.nr_samples = 0;
> + rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> +
> + result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
> + CHECK_RESULT(result, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> + res->width = c2d.width;
> + res->height = c2d.height;
> + res->format = c2d.format;
> + res->resource_id = c2d.resource_id;
> +
> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +}
> +
> +static void
> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_create_3d rc_3d = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_create_3d c3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(c3d);
> +
> + trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> + c3d.width, c3d.height, c3d.depth);
> +
> + rc_3d.target = c3d.target;
> + rc_3d.format = c3d.format;
> + rc_3d.bind = c3d.bind;
> + rc_3d.width = c3d.width;
> + rc_3d.height = c3d.height;
> + rc_3d.depth = c3d.depth;
> + rc_3d.array_size = c3d.array_size;
> + rc_3d.last_level = c3d.last_level;
> + rc_3d.nr_samples = c3d.nr_samples;
> + rc_3d.flags = c3d.flags;
> +
> + result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
> + CHECK_RESULT(result, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> + res->width = c3d.width;
> + res->height = c3d.height;
> + res->format = c3d.format;
> + res->resource_id = c3d.resource_id;
> +
> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +}
> +
> +static void
> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_unref unref;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(unref);
> +
> + trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> +
> + res = virtio_gpu_find_resource(g, unref.resource_id);
> + CHECK(res, cmd);
> +
> + result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
> + CHECK_RESULT(result, cmd);
> +
> + if (res->image) {
> + pixman_image_unref(res->image);
> + }
> +
> + QTAILQ_REMOVE(&g->reslist, res, next);
> + g_free(res);
> +}
> +
> +static void
> +rutabaga_cmd_context_create(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_create cc;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cc);
> + trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
> + cc.debug_name);
> +
> + result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
> + cc.context_init, cc.debug_name, cc.nlen);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_destroy cd;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cd);
> + trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
> +
> + result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result, i;
> + struct virtio_gpu_scanout *scanout = NULL;
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_transfer transfer = { 0 };
> + struct iovec transfer_iovec;
> + struct virtio_gpu_resource_flush rf;
> + bool found = false;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + if (vr->headless) {
> + return;
> + }
> +
> + VIRTIO_GPU_FILL_CMD(rf);
> + trace_virtio_gpu_cmd_res_flush(rf.resource_id,
> + rf.r.width, rf.r.height, rf.r.x, rf.r.y);
> +
> + res = virtio_gpu_find_resource(g, rf.resource_id);
> + CHECK(res, cmd);
> +
> + for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
> + scanout = &g->parent_obj.scanout[i];
> + if (i == res->scanout_bitmask) {
> + found = true;
> + break;
> + }
> + }
> +
> + if (!found) {
> + return;
> + }
> +
> + transfer.x = 0;
> + transfer.y = 0;
> + transfer.z = 0;
> + transfer.w = res->width;
> + transfer.h = res->height;
> + transfer.d = 1;
> +
> + transfer_iovec.iov_base = (void *)pixman_image_get_data(res->image);
> + transfer_iovec.iov_len = res->width * res->height * 4;
> +
> + result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
> + rf.resource_id, &transfer,
> + &transfer_iovec);
> + CHECK_RESULT(result, cmd);
> + dpy_gfx_update_full(scanout->con);
> +}
> +
> +static void
> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_scanout *scanout = NULL;
> + struct virtio_gpu_set_scanout ss;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + if (vr->headless) {
> + return;
> + }
> +
> + VIRTIO_GPU_FILL_CMD(ss);
> + trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
> + ss.r.width, ss.r.height, ss.r.x, ss.r.y);
> +
> + scanout = &g->parent_obj.scanout[ss.scanout_id];
> + g->parent_obj.enable = 1;
I think it's safer to delay this assignment until all CHECK() are done.
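Roughly (sketch, reordering only):

    scanout = &g->parent_obj.scanout[ss.scanout_id];

    if (ss.resource_id == 0) {
        return;
    }

    res = virtio_gpu_find_resource(g, ss.resource_id);
    CHECK(res, cmd);

    /* mark the output enabled only after the checks have passed */
    g->parent_obj.enable = 1;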
> +
> + if (ss.resource_id == 0) {
> + return;
> + }
> +
> + res = virtio_gpu_find_resource(g, ss.resource_id);
> + CHECK(res, cmd);
> +
> + if (!res->image) {
> + pixman_format_code_t pformat;
> + pformat = virtio_gpu_get_pixman_format(res->format);
> + CHECK(pformat, cmd);
> +
> + res->image = pixman_image_create_bits(pformat,
> + res->width,
> + res->height,
> + NULL, 0);
> + CHECK(res->image, cmd);
> + pixman_image_ref(res->image);
> + }
> +
> + /* realloc the surface ptr */
> + scanout->ds = qemu_create_displaysurface_pixman(res->image);
> + dpy_gfx_replace_surface(scanout->con, NULL);
> + dpy_gfx_replace_surface(scanout->con, scanout->ds);
> + res->scanout_bitmask = ss.scanout_id;
> +}
> +
> +static void
> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_cmd_submit cs;
> + g_autofree uint8_t *buf = NULL;
> + size_t s;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cs);
> + trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
> +
> + buf = g_new0(uint8_t, cs.size);
> + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
> + sizeof(cs), buf, cs.size);
> + CHECK((s == cs.size), cmd);
> +
> + result = rutabaga_submit_command(vr->rutabaga, cs.hdr.ctx_id, buf, cs.size);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_to_host_2d t2d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t2d);
> + trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
> +
> + transfer.x = t2d.r.x;
> + transfer.y = t2d.r.y;
> + transfer.z = 0;
> + transfer.w = t2d.r.width;
> + transfer.h = t2d.r.height;
> + transfer.d = 1;
> +
> + result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
> + &transfer);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_host_3d t3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t3d);
> + trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
> +
> + transfer.x = t3d.box.x;
> + transfer.y = t3d.box.y;
> + transfer.z = t3d.box.z;
> + transfer.w = t3d.box.w;
> + transfer.h = t3d.box.h;
> + transfer.d = t3d.box.d;
> + transfer.level = t3d.level;
> + transfer.stride = t3d.stride;
> + transfer.layer_stride = t3d.layer_stride;
> + transfer.offset = t3d.offset;
> +
> + result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
> + t3d.resource_id, &transfer);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_host_3d t3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t3d);
> + trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
> +
> + transfer.x = t3d.box.x;
> + transfer.y = t3d.box.y;
> + transfer.z = t3d.box.z;
> + transfer.w = t3d.box.w;
> + transfer.h = t3d.box.h;
> + transfer.d = t3d.box.d;
> + transfer.level = t3d.level;
> + transfer.stride = t3d.stride;
> + transfer.layer_stride = t3d.layer_stride;
> + transfer.offset = t3d.offset;
> +
> + result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
> + t3d.resource_id, &transfer, NULL);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct rutabaga_iovecs vecs = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_attach_backing att_rb;
> + struct iovec *res_iovs;
> + uint32_t res_niov;
> + int ret;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(att_rb);
> + trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
> +
> + res = virtio_gpu_find_resource(g, att_rb.resource_id);
> + CHECK(res, cmd);
> + CHECK(!res->iov, cmd);
> +
> + ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
> + cmd, NULL, &res_iovs, &res_niov);
> + CHECK_RESULT(ret, cmd);
> +
> + vecs.iovecs = res_iovs;
> + vecs.num_iovecs = res_niov;
> +
> + ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
> + &vecs);
> + if (ret != 0) {
> + virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
The error should be propagated.
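e.g. (sketch):

    ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
                                           &vecs);
    if (ret != 0) {
        virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
    }
    CHECK(!ret, cmd);    /* report the failure back to the guest */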
> + }
> +}
> +
> +static void
> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_detach_backing detach_rb;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(detach_rb);
> + trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
> +
> + res = virtio_gpu_find_resource(g, detach_rb.resource_id);
> + CHECK(res, cmd);
> +
> + rutabaga_resource_detach_backing(vr->rutabaga,
> + detach_rb.resource_id);
> +
> + virtio_gpu_cleanup_mapping(g, res);
> +}
> +
> +static void
> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_resource att_res;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(att_res);
> + trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
> + att_res.resource_id);
> +
> + result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
> + att_res.resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_resource det_res;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(det_res);
> + trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
> + det_res.resource_id);
> +
> + result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
> + det_res.resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_get_capset_info info;
> + struct virtio_gpu_resp_capset_info resp;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(info);
> +
> + result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
> + &resp.capset_id, &resp.capset_max_version,
> + &resp.capset_max_size);
> + CHECK_RESULT(result, cmd);
> +
> + resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
> +static void
> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_get_capset gc;
> + struct virtio_gpu_resp_capset *resp;
> + uint32_t capset_size;
> + uint32_t current_id;
> + bool found = false;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(gc);
> + for (uint32_t i = 0; i < vr->num_capsets; i++) {
> + result = rutabaga_get_capset_info(vr->rutabaga, i,
> + &current_id, &capset_size,
> + &capset_size);
> + CHECK_RESULT(result, cmd);
> +
> + if (current_id == gc.capset_id) {
> + found = true;
> + break;
> + }
> + }
> +
> + if (!found) {
> + error_report("capset not found!");
> + return;
> + }
> +
> + resp = g_malloc0(sizeof(*resp) + capset_size);
> + resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
> + rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
> + (uint8_t *)resp->capset_data, capset_size);
I think this cast is unnecessary.
> +
> + virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
> + g_free(resp);
> +}
> +
> +static void
> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int result;
> + struct rutabaga_iovecs vecs = { 0 };
> + g_autofree struct virtio_gpu_simple_resource *res = NULL;
> + struct virtio_gpu_simple_resource *resource;
> + struct virtio_gpu_resource_create_blob cblob;
> + struct rutabaga_create_blob rc_blob = { 0 };
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cblob);
> + trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> +
> + CHECK(cblob.resource_id != 0, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> +
> + res->resource_id = cblob.resource_id;
> + res->blob_size = cblob.size;
> +
> + if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> + result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
> + sizeof(cblob), cmd, &res->addrs,
> + &res->iov, &res->iov_cnt);
> + CHECK_RESULT(result, cmd);
> + }
> +
> + rc_blob.blob_id = cblob.blob_id;
> + rc_blob.blob_mem = cblob.blob_mem;
> + rc_blob.blob_flags = cblob.blob_flags;
> + rc_blob.size = cblob.size;
> +
> + vecs.iovecs = res->iov;
> + vecs.num_iovecs = res->iov_cnt;
> +
> + result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
> + cblob.resource_id, &rc_blob, &vecs,
> + NULL);
> + CHECK_RESULT(result, cmd);
> + resource = g_steal_pointer(&res);
> + QTAILQ_INSERT_HEAD(&g->reslist, resource, next);
> +}
> +
> +static void
> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + uint32_t slot = 0;
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_mapping mapping = { 0 };
> + struct virtio_gpu_resource_map_blob mblob;
> + struct virtio_gpu_resp_map_info resp;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(mblob);
> +
> + CHECK(mblob.resource_id != 0, cmd);
> +
> + res = virtio_gpu_find_resource(g, mblob.resource_id);
> + CHECK(res, cmd);
> +
> + result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
> + CHECK_RESULT(result, cmd);
> +
> + for (slot = 0; slot < MAX_SLOTS; slot++) {
> + if (memory_regions[slot].used) {
> + continue;
> + }
> +
> + MemoryRegion *mr = &(memory_regions[slot].mr);
> + memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
> + (void *)mapping.ptr);
> + memory_region_add_subregion(&g->parent_obj.hostmem,
> + mblob.offset, mr);
> + memory_regions[slot].resource_id = mblob.resource_id;
> + memory_regions[slot].used = 1;
> + break;
> + }
> +
> + CHECK((slot < MAX_SLOTS), cmd);
> +
> + memset(&resp, 0, sizeof(resp));
> + resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> + result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
> + &resp.map_info);
> +
> + CHECK_RESULT(result, cmd);
> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
> +static void
> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + uint32_t slot = 0;
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_unmap_blob ublob;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(ublob);
> +
> + CHECK(ublob.resource_id != 0, cmd);
> +
> + res = virtio_gpu_find_resource(g, ublob.resource_id);
> + CHECK(res, cmd);
> +
> + for (slot = 0; slot < MAX_SLOTS; slot++) {
> + if (memory_regions[slot].resource_id != ublob.resource_id) {
> + continue;
> + }
> +
> + MemoryRegion *mr = &(memory_regions[slot].mr);
> + memory_region_del_subregion(&g->parent_obj.hostmem, mr);
> +
> + memory_regions[slot].resource_id = 0;
> + memory_regions[slot].used = 0;
> + break;
> + }
> +
> + CHECK((slot < MAX_SLOTS), cmd);
> + result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct rutabaga_fence fence = { 0 };
> + int32_t result;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
> +
> + switch (cmd->cmd_hdr.type) {
> + case VIRTIO_GPU_CMD_CTX_CREATE:
> + rutabaga_cmd_context_create(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_DESTROY:
> + rutabaga_cmd_context_destroy(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
> + rutabaga_cmd_create_resource_2d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
> + rutabaga_cmd_create_resource_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_SUBMIT_3D:
> + rutabaga_cmd_submit_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
> + rutabaga_cmd_transfer_to_host_2d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
> + rutabaga_cmd_transfer_to_host_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
> + rutabaga_cmd_transfer_from_host_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
> + rutabaga_cmd_attach_backing(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> + rutabaga_cmd_detach_backing(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_SET_SCANOUT:
> + rutabaga_cmd_set_scanout(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
> + rutabaga_cmd_resource_flush(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_UNREF:
> + rutabaga_cmd_resource_unref(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
> + rutabaga_cmd_ctx_attach_resource(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
> + rutabaga_cmd_ctx_detach_resource(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> + rutabaga_cmd_get_capset_info(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_CAPSET:
> + rutabaga_cmd_get_capset(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
> + virtio_gpu_get_display_info(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_EDID:
> + virtio_gpu_get_edid(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> + rutabaga_cmd_resource_create_blob(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> + rutabaga_cmd_resource_map_blob(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> + rutabaga_cmd_resource_unmap_blob(g, cmd);
> + break;
> + default:
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> + break;
> + }
> +
> + if (cmd->finished) {
> + return;
> + }
> + if (cmd->error) {
> + error_report("%s: ctrl 0x%x, error 0x%x", __func__,
> + cmd->cmd_hdr.type, cmd->error);
> + virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
> + return;
> + }
> + if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> + virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
> + return;
> + }
> +
> + fence.flags = cmd->cmd_hdr.flags;
> + fence.ctx_id = cmd->cmd_hdr.ctx_id;
> + fence.fence_id = cmd->cmd_hdr.fence_id;
> + fence.ring_idx = cmd->cmd_hdr.ring_idx;
> +
> + trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
> +
> + result = rutabaga_create_fence(vr->rutabaga, &fence);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_aio_cb(void *opaque)
> +{
> + struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
It's C; you don't need a cast here.
> + VirtIOGPU *g = (VirtIOGPU *)data->vr;
Use: VIRTIO_GPU()
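i.e. the two lines could read (sketch):

    struct rutabaga_aio_data *data = opaque;  /* void * converts implicitly */
    VirtIOGPU *g = VIRTIO_GPU(data->vr);      /* QOM checked cast */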
> + struct rutabaga_fence fence_data = data->fence;
> + struct virtio_gpu_ctrl_command *cmd, *tmp;
> +
> + uint32_t signaled_ctx_specific = fence_data.flags &
> + RUTABAGA_FLAG_INFO_RING_IDX;
> +
> + QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
> + /*
> + * Due to context specific timelines.
> + */
> + uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
> + RUTABAGA_FLAG_INFO_RING_IDX;
> +
> + if (signaled_ctx_specific != target_ctx_specific) {
> + continue;
> + }
> +
> + if (signaled_ctx_specific &&
> + (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
> + continue;
> + }
> +
> + if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
> + continue;
> + }
> +
> + trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
> + virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
> + QTAILQ_REMOVE(&g->fenceq, cmd, next);
> + g_free(cmd);
> + }
> +
> + g_free(data);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> + struct rutabaga_fence fence_data) {
> + struct rutabaga_aio_data *data;
> + VirtIOGPU *g = (VirtIOGPU *)user_data;
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + /*
> + * Both gfxstream and cross-domain (and even newer versions of virglrenderer:
> + * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence completion on
> + * threads ("callback threads") that are different from the thread that
> + * processes the command queue ("main thread").
> + *
> + * crosvm and other virtio-gpu 1.1implementations enable callback threads
Missing space between "1.1" and "implementations".
> + * via locking. However, on QEMU a deadlock is observed if
> + * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
> + * from a thread that is not the main thread.
> + *
> + * The reason is QEMU's internal locking is designed to work with QEMU
> + * threads (see rcu_register_thread()) and not generic C/C++/Rust threads.
> + * For now, we can workaround this by scheduling the return of the
> + * fence descriptors on the main thread.
> + */
> +
> + data = g_new0(struct rutabaga_aio_data, 1);
> + data->vr = vr;
> + data->fence = fence_data;
> + aio_bh_schedule_oneshot_full(vr->ctx, virtio_gpu_rutabaga_aio_cb,
> + (void *)data, "aio");
> +}
> +
> +static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
> +{
> + int result;
> + uint64_t capset_mask;
> + struct rutabaga_channels channels = { 0 };
> + struct rutabaga_builder builder = { 0 };
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + vr->rutabaga = NULL;
> +
> + if (!vr->capset_names) {
> + return -EINVAL;
> + }
> +
> + builder.wsi = RUTABAGA_WSI_SURFACELESS;
> + /*
> + * Currently, if WSI is specified, the only valid strings are "surfaceless"
> + * or "headless". Surfaceless doesn't create a native window surface, but
> + * does copy from the render target to the Pixman buffer if a virtio-gpu
> + * 2D hypercall is issued. Surfaceless is the default.
> + *
> + * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
> + * use case is automated testing environments where there is no need to view
> + * results.
> + *
> + * In the future, more performant virtio-gpu 2D UI integration may be added.
> + */
> + if (vr->wsi) {
> + if (!strcmp(vr->wsi, "surfaceless")) {
> + vr->headless = false;
> + } else if (strcmp(vr->wsi, "headless")) {
> + vr->headless = true;
> + } else {
> + return -EINVAL;
> + }
> + }
> +
> + result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);
> + if (result) {
> + return result;
> + }
> +
> + /*
> + * rutabaga-0.1.1 is only compiled/tested with gfxstream and cross-domain
> + * support. Future versions may change this to have more context types if
> + * there is any interest.
> + */
> + if (capset_mask & (BIT(RUTABAGA_CAPSET_VIRGL) |
> + BIT(RUTABAGA_CAPSET_VIRGL2) |
> + BIT(RUTABAGA_CAPSET_VENUS) |
> + BIT(RUTABAGA_CAPSET_DRM))) {
> + return -EINVAL;
> + }
> +
> + builder.user_data = (uint64_t)(uintptr_t *)(void *)g;
> + builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
> + builder.capset_mask = capset_mask;
> +
> + if (vr->wayland_socket_path) {
> + if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) == 0) {
> + return -EINVAL;
> + }
> +
> + channels.channels =
> + (struct rutabaga_channel *)calloc(1, sizeof(struct rutabaga_channel));
> + channels.num_channels = 1;
> + channels.channels[0].channel_name = vr->wayland_socket_path;
> + channels.channels[0].channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
> + builder.channels = &channels;
> + }
> +
> + result = rutabaga_init(&builder, &vr->rutabaga);
> + if (builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) {
> + free(channels.channels);
> + }
> +
> + memset(&memory_regions, 0, MAX_SLOTS * sizeof(struct MemoryRegionInfo));
> + vr->ctx = qemu_get_aio_context();
You don't need to store the result of qemu_get_aio_context(); this
function is always available as far as I know.
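i.e. the fence callback could pass it directly and the ctx field could go
away (sketch, reusing the call already in the patch):

    aio_bh_schedule_oneshot_full(qemu_get_aio_context(),
                                 virtio_gpu_rutabaga_aio_cb,
                                 data, "aio");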
> + return result;
> +}
> +
> +static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
> +{
> + int result;
> + uint32_t num_capsets;
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + if (!vr->rutabaga_active) {
> + result = virtio_gpu_rutabaga_init(g);
> + if (result) {
> + error_report("Failed to init rutabaga");
> + return 0;
> + }
> +
> + vr->rutabaga_active = true;
> + }
> +
> + result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
> + if (result) {
> + error_report("Failed to get capsets");
> + return 0;
> + }
> + vr->num_capsets = num_capsets;
> + return num_capsets;
> +}
> +
> +static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
> +{
> + VirtIOGPU *g = VIRTIO_GPU(vdev);
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + struct virtio_gpu_ctrl_command *cmd;
> +
> + if (!virtio_queue_ready(vq)) {
> + return;
> + }
> +
> + if (!vr->rutabaga_active) {
> + int result = virtio_gpu_rutabaga_init(g);
> + if (!result) {
> + vr->rutabaga_active = true;
> + }
> + }
> +
> + if (!vr->rutabaga_active) {
> + return;
> + }
> +
> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> + while (cmd) {
> + cmd->vq = vq;
> + cmd->error = 0;
> + cmd->finished = false;
> + QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> + }
> +
> + virtio_gpu_process_cmdq(g);
> +}
> +
> +static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
> +{
> + int num_capsets;
> + VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
> + VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
> +
> + num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
> + if (!num_capsets) {
> + return;
> + }
> +
> +#if HOST_BIG_ENDIAN
> + error_setg(errp, "rutabaga is not supported on bigendian platforms");
> + return;
> +#endif
> +
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
> +
> + bdev->virtio_config.num_capsets = num_capsets;
> + virtio_gpu_device_realize(qdev, errp);
> +}
> +
> +static Property virtio_gpu_rutabaga_properties[] = {
> + DEFINE_PROP_STRING("capset_names", VirtioGpuRutabaga, capset_names),
> + DEFINE_PROP_STRING("wayland_socket_path", VirtioGpuRutabaga,
> + wayland_socket_path),
> + DEFINE_PROP_STRING("wsi", VirtioGpuRutabaga, wsi),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> + VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
> + VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
> +
> + vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
> + vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
> + vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
> + vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
> +
> + vdc->realize = virtio_gpu_rutabaga_realize;
> + device_class_set_props(dc, virtio_gpu_rutabaga_properties);
> +}
> +
> +static const TypeInfo virtio_gpu_rutabaga_info = {
> + .name = TYPE_VIRTIO_GPU_RUTABAGA,
> + .parent = TYPE_VIRTIO_GPU,
> + .instance_size = sizeof(VirtioGpuRutabaga),
> + .class_init = virtio_gpu_rutabaga_class_init,
> +};
> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
> +module_kconfig(VIRTIO_GPU);
> +
> +static void virtio_register_types(void)
> +{
> + type_register_static(&virtio_gpu_rutabaga_info);
> +}
> +
> +type_init(virtio_register_types)
> +
> +module_dep("hw-display-virtio-gpu");
> diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
> new file mode 100644
> index 0000000000..01831bd03f
> --- /dev/null
> +++ b/hw/display/virtio-vga-rutabaga.c
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "hw/pci/pci.h"
> +#include "hw/qdev-properties.h"
> +#include "hw/virtio/virtio-gpu.h"
> +#include "hw/display/vga.h"
> +#include "qapi/error.h"
> +#include "qemu/module.h"
> +#include "virtio-vga.h"
> +#include "qom/object.h"
> +
> +#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
> +
> +typedef struct VirtIOVGARUTABAGA VirtIOVGARUTABAGA;
> +DECLARE_INSTANCE_CHECKER(VirtIOVGARUTABAGA, VIRTIO_VGA_RUTABAGA,
> + TYPE_VIRTIO_VGA_RUTABAGA)
> +
> +struct VirtIOVGARUTABAGA {
> + VirtIOVGABase parent_obj;
> +
> + VirtioGpuRutabaga vdev;
> +};
> +
> +static void virtio_vga_rutabaga_inst_initfn(Object *obj)
> +{
> + VirtIOVGARUTABAGA *dev = VIRTIO_VGA_RUTABAGA(obj);
> +
> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> + TYPE_VIRTIO_GPU_RUTABAGA);
> + VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> +}
> +
> +static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
> + .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
> + .parent = TYPE_VIRTIO_VGA_BASE,
> + .instance_size = sizeof(VirtIOVGARUTABAGA),
> + .instance_init = virtio_vga_rutabaga_inst_initfn,
> +};
> +module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
> +module_kconfig(VIRTIO_VGA);
> +
> +static void virtio_vga_register_types(void)
> +{
> + if (have_vga) {
> + virtio_pci_types_register(&virtio_vga_rutabaga_info);
> + }
> +}
> +
> +type_init(virtio_vga_register_types)
> +
> +module_dep("hw-display-virtio-vga");
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream
2023-07-11 2:56 ` [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
2023-07-12 12:31 ` Akihiko Odaki
@ 2023-07-12 19:14 ` Marc-André Lureau
2023-07-13 1:27 ` Gurchetan Singh
2023-07-15 19:58 ` Bernhard Beschow
2 siblings, 1 reply; 22+ messages in thread
From: Marc-André Lureau @ 2023-07-12 19:14 UTC (permalink / raw)
To: Gurchetan Singh
Cc: qemu-devel, akihiko.odaki, dmitry.osipenko, ray.huang,
alex.bennee, shentey, Gerd Hoffmann
[-- Attachment #1: Type: text/plain, Size: 46732 bytes --]
Hi
On Tue, Jul 11, 2023 at 6:57 AM Gurchetan Singh <gurchetansingh@chromium.org>
wrote:
> This adds initial support for gfxstream and cross-domain. Both
> features rely on virtio-gpu blob resources and context types, which
> are also implemented in this patch.
>
> gfxstream has a long and illustrious history in Android graphics
> paravirtualization. It has been powering graphics in the Android
> Studio Emulator for more than a decade, which is the main developer
> platform.
>
> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
> The key design characteristic was a 1:1 threading model and
> auto-generation, which fit nicely with the OpenGLES spec. It also
> allowed easy layering with ANGLE on the host, which provides the GLES
> implementations on Windows or macOS environments.
>
> gfxstream has traditionally been maintained by a single engineer, and
> between 2015 to 2021, the goldfish throne passed to Frank Yang.
> Historians often remark this glorious reign ("pax gfxstreama" is the
> academic term) was comparable to that of Augustus and both Queen
> Elizabeths. Just to name a few accomplishments in a resplendent
> panoply: higher versions of GLES, address space graphics, snapshot
> support and CTS compliant Vulkan [b].
>
> One major drawback was the use of out-of-tree goldfish drivers.
> Android engineers didn't know much about DRM/KMS and especially TTM so
> a simple guest to host pipe was conceived.
>
> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
> the Mesa/virglrenderer communities. In 2018, the initial virtio-gpu
> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
> It was a symbol compatible replacement of virglrenderer [c] and named
> "AVDVirglrenderer". This implementation forms the basis of the
> current gfxstream host implementation still in use today.
>
> cross-domain support follows a similar arc. Originally conceived by
> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
> 2018, it initially relied on the downstream "virtio-wl" device.
>
> In 2020 and 2021, virtio-gpu was extended to include blob resources
> and multiple timelines by yours truly, features gfxstream/cross-domain
> both require to function correctly.
>
> Right now, we stand at the precipice of a truly fantastic possibility:
> the Android Emulator powered by upstream QEMU and upstream Linux
> kernel. gfxstream will then be packaged properly, and app
> developers can even fix gfxstream bugs on their own if they encounter
> them.
>
> It's been quite the ride, my friends. Where will gfxstream head next,
> nobody really knows. I wouldn't be surprised if it's around for
> another decade, maintained by a new generation of Android graphics
> enthusiasts.
>
> Technical details:
> - Very simple initial display integration: just used Pixman
> - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
> calls
>
>
Wow, this is not for the faint reader... there is a lot to grasp in this gfx
space...
Could you perhaps expand on what this current code can do for an average
Linux VM? or for some Android VM (which one?!), and then what are the next
steps and status?
My limited understanding (from this series and from
https://gitlab.com/qemu-project/qemu/-/issues/1611) is that it allows
passing-through some vulkan APIs for off-screen usage. Is that accurate?
How far are we from getting upstream QEMU to be used by Android Emulator?
(in the gfx domain at least) What would it take to get the average Linux VM
to use virtio-vga-rutabaga instead of virtio-vga-gl to get accelerated
rendering?
[a] https://android-review.googlesource.com/c/platform/development/+/34470
> [b]
> https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
> [c]
> https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> ---
> v2: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
> - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> - Used error_report(..)
> - Used g_autofree to fix leaks on error paths
> - Removed unnecessary casts
> - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>
> hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
> hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
> hw/display/virtio-vga-rutabaga.c | 52 ++
> 3 files changed, 1188 insertions(+)
> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> create mode 100644 hw/display/virtio-gpu-rutabaga.c
> create mode 100644 hw/display/virtio-vga-rutabaga.c
>
> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c
> b/hw/display/virtio-gpu-pci-rutabaga.c
> new file mode 100644
> index 0000000000..5765bef266
> --- /dev/null
> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "qapi/error.h"
> +#include "qemu/module.h"
> +#include "hw/pci/pci.h"
> +#include "hw/qdev-properties.h"
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/virtio-bus.h"
> +#include "hw/virtio/virtio-gpu-pci.h"
> +#include "qom/object.h"
> +
> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
> +typedef struct VirtIOGPURUTABAGAPCI VirtIOGPURUTABAGAPCI;
> +DECLARE_INSTANCE_CHECKER(VirtIOGPURUTABAGAPCI, VIRTIO_GPU_RUTABAGA_PCI,
> + TYPE_VIRTIO_GPU_RUTABAGA_PCI)
> +
> +struct VirtIOGPURUTABAGAPCI {
> + VirtIOGPUPCIBase parent_obj;
> + VirtioGpuRutabaga vdev;
> +};
> +
> +static void virtio_gpu_rutabaga_initfn(Object *obj)
> +{
> + VirtIOGPURUTABAGAPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
> +
> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> + TYPE_VIRTIO_GPU_RUTABAGA);
> + VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> +}
> +
> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
> + .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
> + .parent = TYPE_VIRTIO_GPU_PCI_BASE,
> + .instance_size = sizeof(VirtIOGPURUTABAGAPCI),
> + .instance_init = virtio_gpu_rutabaga_initfn,
> +};
> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
> +module_kconfig(VIRTIO_PCI);
> +
> +static void virtio_gpu_rutabaga_pci_register_types(void)
> +{
> + virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
> +}
> +
> +type_init(virtio_gpu_rutabaga_pci_register_types)
> +
> +module_dep("hw-display-virtio-gpu-pci");
> diff --git a/hw/display/virtio-gpu-rutabaga.c
> b/hw/display/virtio-gpu-rutabaga.c
> new file mode 100644
> index 0000000000..b60a30a093
> --- /dev/null
> +++ b/hw/display/virtio-gpu-rutabaga.c
> @@ -0,0 +1,1088 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "qemu/error-report.h"
> +#include "qemu/iov.h"
> +#include "trace.h"
> +#include "hw/virtio/virtio.h"
> +#include "hw/virtio/virtio-gpu.h"
> +#include "hw/virtio/virtio-gpu-pixman.h"
> +#include "hw/virtio/virtio-iommu.h"
> +
> +#include <glib/gmem.h>
> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
> +
> +#define CHECK(condition, cmd) \
> + do { \
> + if (!(condition)) { \
> + error_report("CHECK failed in %s() %s:" "%d", __func__, \
> + __FILE__, __LINE__); \
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
> + return; \
> + } \
> + } while (0)
> +
> +#define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
> +
> +#define MAX_SLOTS 4096
> +
> +struct MemoryRegionInfo {
> + int used;
> + MemoryRegion mr;
> + uint32_t resource_id;
> +};
> +
> +static struct MemoryRegionInfo memory_regions[MAX_SLOTS];
> +
> +struct rutabaga_aio_data {
> + struct VirtioGpuRutabaga *vr;
> + struct rutabaga_fence fence;
> +};
> +
> +static void
> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout
> *s,
> + uint32_t resource_id)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_transfer transfer = { 0 };
> + struct iovec transfer_iovec;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + res = virtio_gpu_find_resource(g, resource_id);
> + if (!res) {
> + return;
> + }
> +
> + if (res->width != s->current_cursor->width ||
> + res->height != s->current_cursor->height) {
> + return;
> + }
> +
> + transfer.x = 0;
> + transfer.y = 0;
> + transfer.z = 0;
> + transfer.w = res->width;
> + transfer.h = res->height;
> + transfer.d = 1;
> +
> + transfer_iovec.iov_base = (void *)s->current_cursor->data;
> + transfer_iovec.iov_len = res->width * res->height * 4;
> +
> + rutabaga_resource_transfer_read(vr->rutabaga, 0,
> + resource_id, &transfer,
> + &transfer_iovec);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
> +{
> + VirtIOGPU *g = VIRTIO_GPU(b);
> + virtio_gpu_process_cmdq(g);
> +}
> +
> +static void
> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_create_3d rc_3d = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_create_2d c2d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(c2d);
> + trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
> + c2d.width, c2d.height);
> +
> + rc_3d.target = 2;
> + rc_3d.format = c2d.format;
> + rc_3d.bind = (1 << 1);
> + rc_3d.width = c2d.width;
> + rc_3d.height = c2d.height;
> + rc_3d.depth = 1;
> + rc_3d.array_size = 1;
> + rc_3d.last_level = 0;
> + rc_3d.nr_samples = 0;
> + rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
> +
> + result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id,
> &rc_3d);
> + CHECK_RESULT(result, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> + res->width = c2d.width;
> + res->height = c2d.height;
> + res->format = c2d.format;
> + res->resource_id = c2d.resource_id;
> +
> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +}
> +
> +static void
> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_create_3d rc_3d = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_create_3d c3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(c3d);
> +
> + trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
> + c3d.width, c3d.height, c3d.depth);
> +
> + rc_3d.target = c3d.target;
> + rc_3d.format = c3d.format;
> + rc_3d.bind = c3d.bind;
> + rc_3d.width = c3d.width;
> + rc_3d.height = c3d.height;
> + rc_3d.depth = c3d.depth;
> + rc_3d.array_size = c3d.array_size;
> + rc_3d.last_level = c3d.last_level;
> + rc_3d.nr_samples = c3d.nr_samples;
> + rc_3d.flags = c3d.flags;
> +
> + result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id,
> &rc_3d);
> + CHECK_RESULT(result, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> + res->width = c3d.width;
> + res->height = c3d.height;
> + res->format = c3d.format;
> + res->resource_id = c3d.resource_id;
> +
> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
> +}
> +
> +static void
> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_unref unref;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(unref);
> +
> + trace_virtio_gpu_cmd_res_unref(unref.resource_id);
> +
> + res = virtio_gpu_find_resource(g, unref.resource_id);
> + CHECK(res, cmd);
> +
> + result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
> + CHECK_RESULT(result, cmd);
> +
> + if (res->image) {
> + pixman_image_unref(res->image);
> + }
> +
> + QTAILQ_REMOVE(&g->reslist, res, next);
> + g_free(res);
> +}
> +
> +static void
> +rutabaga_cmd_context_create(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_create cc;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cc);
> + trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
> + cc.debug_name);
> +
> + result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
> + cc.context_init, cc.debug_name,
> cc.nlen);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_destroy cd;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cd);
> + trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
> +
> + result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> +{
> + int32_t result, i;
> + struct virtio_gpu_scanout *scanout = NULL;
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_transfer transfer = { 0 };
> + struct iovec transfer_iovec;
> + struct virtio_gpu_resource_flush rf;
> + bool found = false;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + if (vr->headless) {
> + return;
> + }
> +
> + VIRTIO_GPU_FILL_CMD(rf);
> + trace_virtio_gpu_cmd_res_flush(rf.resource_id,
> + rf.r.width, rf.r.height, rf.r.x,
> rf.r.y);
> +
> + res = virtio_gpu_find_resource(g, rf.resource_id);
> + CHECK(res, cmd);
> +
> + for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
> + scanout = &g->parent_obj.scanout[i];
> + if (i == res->scanout_bitmask) {
> + found = true;
> + break;
> + }
> + }
> +
> + if (!found) {
> + return;
> + }
> +
> + transfer.x = 0;
> + transfer.y = 0;
> + transfer.z = 0;
> + transfer.w = res->width;
> + transfer.h = res->height;
> + transfer.d = 1;
> +
> + transfer_iovec.iov_base = (void *)pixman_image_get_data(res->image);
> + transfer_iovec.iov_len = res->width * res->height * 4;
> +
> + result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
> + rf.resource_id, &transfer,
> + &transfer_iovec);
> + CHECK_RESULT(result, cmd);
> + dpy_gfx_update_full(scanout->con);
> +}
> +
> +static void
> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_scanout *scanout = NULL;
> + struct virtio_gpu_set_scanout ss;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + if (vr->headless) {
> + return;
> + }
> +
> + VIRTIO_GPU_FILL_CMD(ss);
> + trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
> + ss.r.width, ss.r.height, ss.r.x,
> ss.r.y);
> +
> + scanout = &g->parent_obj.scanout[ss.scanout_id];
> + g->parent_obj.enable = 1;
> +
> + if (ss.resource_id == 0) {
> + return;
> + }
> +
> + res = virtio_gpu_find_resource(g, ss.resource_id);
> + CHECK(res, cmd);
> +
> + if (!res->image) {
> + pixman_format_code_t pformat;
> + pformat = virtio_gpu_get_pixman_format(res->format);
> + CHECK(pformat, cmd);
> +
> + res->image = pixman_image_create_bits(pformat,
> + res->width,
> + res->height,
> + NULL, 0);
> + CHECK(res->image, cmd);
> + pixman_image_ref(res->image);
> + }
> +
> + /* realloc the surface ptr */
> + scanout->ds = qemu_create_displaysurface_pixman(res->image);
> + dpy_gfx_replace_surface(scanout->con, NULL);
> + dpy_gfx_replace_surface(scanout->con, scanout->ds);
> + res->scanout_bitmask = ss.scanout_id;
> +}
> +
> +static void
> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_cmd_submit cs;
> + g_autofree uint8_t *buf = NULL;
> + size_t s;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cs);
> + trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
> +
> + buf = g_new0(uint8_t, cs.size);
> + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
> + sizeof(cs), buf, cs.size);
> + CHECK((s == cs.size), cmd);
> +
> + result = rutabaga_submit_command(vr->rutabaga, cs.hdr.ctx_id, buf,
> cs.size);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_to_host_2d t2d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t2d);
> + trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
> +
> + transfer.x = t2d.r.x;
> + transfer.y = t2d.r.y;
> + transfer.z = 0;
> + transfer.w = t2d.r.width;
> + transfer.h = t2d.r.height;
> + transfer.d = 1;
> +
> + result = rutabaga_resource_transfer_write(vr->rutabaga, 0,
> t2d.resource_id,
> + &transfer);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_host_3d t3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t3d);
> + trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
> +
> + transfer.x = t3d.box.x;
> + transfer.y = t3d.box.y;
> + transfer.z = t3d.box.z;
> + transfer.w = t3d.box.w;
> + transfer.h = t3d.box.h;
> + transfer.d = t3d.box.d;
> + transfer.level = t3d.level;
> + transfer.stride = t3d.stride;
> + transfer.layer_stride = t3d.layer_stride;
> + transfer.offset = t3d.offset;
> +
> + result = rutabaga_resource_transfer_write(vr->rutabaga,
> t3d.hdr.ctx_id,
> + t3d.resource_id, &transfer);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct rutabaga_transfer transfer = { 0 };
> + struct virtio_gpu_transfer_host_3d t3d;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(t3d);
> + trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
> +
> + transfer.x = t3d.box.x;
> + transfer.y = t3d.box.y;
> + transfer.z = t3d.box.z;
> + transfer.w = t3d.box.w;
> + transfer.h = t3d.box.h;
> + transfer.d = t3d.box.d;
> + transfer.level = t3d.level;
> + transfer.stride = t3d.stride;
> + transfer.layer_stride = t3d.layer_stride;
> + transfer.offset = t3d.offset;
> +
> + result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
> + t3d.resource_id, &transfer,
> NULL);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> +{
> + struct rutabaga_iovecs vecs = { 0 };
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_attach_backing att_rb;
> + struct iovec *res_iovs;
> + uint32_t res_niov;
> + int ret;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(att_rb);
> + trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
> +
> + res = virtio_gpu_find_resource(g, att_rb.resource_id);
> + CHECK(res, cmd);
> + CHECK(!res->iov, cmd);
> +
> + ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries,
> sizeof(att_rb),
> + cmd, NULL, &res_iovs, &res_niov);
> + CHECK_RESULT(ret, cmd);
> +
> + vecs.iovecs = res_iovs;
> + vecs.num_iovecs = res_niov;
> +
> + ret = rutabaga_resource_attach_backing(vr->rutabaga,
> att_rb.resource_id,
> + &vecs);
> + if (ret != 0) {
> + virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
> + }
> +}
> +
> +static void
> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> +{
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_detach_backing detach_rb;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(detach_rb);
> + trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
> +
> + res = virtio_gpu_find_resource(g, detach_rb.resource_id);
> + CHECK(res, cmd);
> +
> + rutabaga_resource_detach_backing(vr->rutabaga,
> + detach_rb.resource_id);
> +
> + virtio_gpu_cleanup_mapping(g, res);
> +}
> +
> +static void
> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_resource att_res;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(att_res);
> + trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
> + att_res.resource_id);
> +
> + result = rutabaga_context_attach_resource(vr->rutabaga,
> att_res.hdr.ctx_id,
> + att_res.resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_ctx_resource det_res;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(det_res);
> + trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
> + det_res.resource_id);
> +
> + result = rutabaga_context_detach_resource(vr->rutabaga,
> det_res.hdr.ctx_id,
> + det_res.resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command
> *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_get_capset_info info;
> + struct virtio_gpu_resp_capset_info resp;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(info);
> +
> + result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
> + &resp.capset_id,
> &resp.capset_max_version,
> + &resp.capset_max_size);
> + CHECK_RESULT(result, cmd);
> +
> + resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
> +static void
> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + struct virtio_gpu_get_capset gc;
> + struct virtio_gpu_resp_capset *resp;
> + uint32_t capset_size;
> + uint32_t current_id;
> + bool found = false;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(gc);
> + for (uint32_t i = 0; i < vr->num_capsets; i++) {
> + result = rutabaga_get_capset_info(vr->rutabaga, i,
> + &current_id, &capset_size,
> + &capset_size);
> + CHECK_RESULT(result, cmd);
> +
> + if (current_id == gc.capset_id) {
> + found = true;
> + break;
> + }
> + }
> +
> + if (!found) {
> + error_report("capset not found!");
> + return;
> + }
> +
> + resp = g_malloc0(sizeof(*resp) + capset_size);
> + resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
> + rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
> + (uint8_t *)resp->capset_data, capset_size);
> +
> + virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) +
> capset_size);
> + g_free(resp);
> +}
> +
> +static void
> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int result;
> + struct rutabaga_iovecs vecs = { 0 };
> + g_autofree struct virtio_gpu_simple_resource *res = NULL;
> + struct virtio_gpu_simple_resource *resource;
> + struct virtio_gpu_resource_create_blob cblob;
> + struct rutabaga_create_blob rc_blob = { 0 };
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cblob);
> + trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
> +
> + CHECK(cblob.resource_id != 0, cmd);
> +
> + res = g_new0(struct virtio_gpu_simple_resource, 1);
> +
> + res->resource_id = cblob.resource_id;
> + res->blob_size = cblob.size;
> +
> + if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
> + result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
> + sizeof(cblob), cmd,
> &res->addrs,
> + &res->iov, &res->iov_cnt);
> + CHECK_RESULT(result, cmd);
> + }
> +
> + rc_blob.blob_id = cblob.blob_id;
> + rc_blob.blob_mem = cblob.blob_mem;
> + rc_blob.blob_flags = cblob.blob_flags;
> + rc_blob.size = cblob.size;
> +
> + vecs.iovecs = res->iov;
> + vecs.num_iovecs = res->iov_cnt;
> +
> + result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
> + cblob.resource_id, &rc_blob,
> &vecs,
> + NULL);
> + CHECK_RESULT(result, cmd);
> + resource = g_steal_pointer(&res);
> + QTAILQ_INSERT_HEAD(&g->reslist, resource, next);
> +}
> +
> +static void
> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + uint32_t slot = 0;
> + struct virtio_gpu_simple_resource *res;
> + struct rutabaga_mapping mapping = { 0 };
> + struct virtio_gpu_resource_map_blob mblob;
> + struct virtio_gpu_resp_map_info resp;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(mblob);
> +
> + CHECK(mblob.resource_id != 0, cmd);
> +
> + res = virtio_gpu_find_resource(g, mblob.resource_id);
> + CHECK(res, cmd);
> +
> + result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id,
> &mapping);
> + CHECK_RESULT(result, cmd);
> +
> + for (slot = 0; slot < MAX_SLOTS; slot++) {
> + if (memory_regions[slot].used) {
> + continue;
> + }
> +
> + MemoryRegion *mr = &(memory_regions[slot].mr);
> + memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
> + (void *)mapping.ptr);
> + memory_region_add_subregion(&g->parent_obj.hostmem,
> + mblob.offset, mr);
> + memory_regions[slot].resource_id = mblob.resource_id;
> + memory_regions[slot].used = 1;
> + break;
> + }
> +
> + CHECK((slot < MAX_SLOTS), cmd);
> +
> + memset(&resp, 0, sizeof(resp));
> + resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
> + result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
> + &resp.map_info);
> +
> + CHECK_RESULT(result, cmd);
> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
> +}
> +
> +static void
> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + int32_t result;
> + uint32_t slot = 0;
> + struct virtio_gpu_simple_resource *res;
> + struct virtio_gpu_resource_unmap_blob ublob;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(ublob);
> +
> + CHECK(ublob.resource_id != 0, cmd);
> +
> + res = virtio_gpu_find_resource(g, ublob.resource_id);
> + CHECK(res, cmd);
> +
> + for (slot = 0; slot < MAX_SLOTS; slot++) {
> + if (memory_regions[slot].resource_id != ublob.resource_id) {
> + continue;
> + }
> +
> + MemoryRegion *mr = &(memory_regions[slot].mr);
> + memory_region_del_subregion(&g->parent_obj.hostmem, mr);
> +
> + memory_regions[slot].resource_id = 0;
> + memory_regions[slot].used = 0;
> + break;
> + }
> +
> + CHECK((slot < MAX_SLOTS), cmd);
> + result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> + struct virtio_gpu_ctrl_command *cmd)
> +{
> + struct rutabaga_fence fence = { 0 };
> + int32_t result;
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
> +
> + switch (cmd->cmd_hdr.type) {
> + case VIRTIO_GPU_CMD_CTX_CREATE:
> + rutabaga_cmd_context_create(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_DESTROY:
> + rutabaga_cmd_context_destroy(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
> + rutabaga_cmd_create_resource_2d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
> + rutabaga_cmd_create_resource_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_SUBMIT_3D:
> + rutabaga_cmd_submit_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
> + rutabaga_cmd_transfer_to_host_2d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
> + rutabaga_cmd_transfer_to_host_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
> + rutabaga_cmd_transfer_from_host_3d(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
> + rutabaga_cmd_attach_backing(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
> + rutabaga_cmd_detach_backing(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_SET_SCANOUT:
> + rutabaga_cmd_set_scanout(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
> + rutabaga_cmd_resource_flush(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_UNREF:
> + rutabaga_cmd_resource_unref(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
> + rutabaga_cmd_ctx_attach_resource(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
> + rutabaga_cmd_ctx_detach_resource(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
> + rutabaga_cmd_get_capset_info(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_CAPSET:
> + rutabaga_cmd_get_capset(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
> + virtio_gpu_get_display_info(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_GET_EDID:
> + virtio_gpu_get_edid(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
> + rutabaga_cmd_resource_create_blob(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
> + rutabaga_cmd_resource_map_blob(g, cmd);
> + break;
> + case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
> + rutabaga_cmd_resource_unmap_blob(g, cmd);
> + break;
> + default:
> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
> + break;
> + }
> +
> + if (cmd->finished) {
> + return;
> + }
> + if (cmd->error) {
> + error_report("%s: ctrl 0x%x, error 0x%x", __func__,
> + cmd->cmd_hdr.type, cmd->error);
> + virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
> + return;
> + }
> + if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
> + virtio_gpu_ctrl_response_nodata(g, cmd,
> VIRTIO_GPU_RESP_OK_NODATA);
> + return;
> + }
> +
> + fence.flags = cmd->cmd_hdr.flags;
> + fence.ctx_id = cmd->cmd_hdr.ctx_id;
> + fence.fence_id = cmd->cmd_hdr.fence_id;
> + fence.ring_idx = cmd->cmd_hdr.ring_idx;
> +
> + trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
> +
> + result = rutabaga_create_fence(vr->rutabaga, &fence);
> + CHECK_RESULT(result, cmd);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_aio_cb(void *opaque)
> +{
> + struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
> + VirtIOGPU *g = (VirtIOGPU *)data->vr;
> + struct rutabaga_fence fence_data = data->fence;
> + struct virtio_gpu_ctrl_command *cmd, *tmp;
> +
> + uint32_t signaled_ctx_specific = fence_data.flags &
> + RUTABAGA_FLAG_INFO_RING_IDX;
> +
> + QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
> + /*
> + * Due to context specific timelines.
> + */
> + uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
> + RUTABAGA_FLAG_INFO_RING_IDX;
> +
> + if (signaled_ctx_specific != target_ctx_specific) {
> + continue;
> + }
> +
> + if (signaled_ctx_specific &&
> + (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
> + continue;
> + }
> +
> + if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
> + continue;
> + }
> +
> + trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
> + virtio_gpu_ctrl_response_nodata(g, cmd,
> VIRTIO_GPU_RESP_OK_NODATA);
> + QTAILQ_REMOVE(&g->fenceq, cmd, next);
> + g_free(cmd);
> + }
> +
> + g_free(data);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> + struct rutabaga_fence fence_data) {
> + struct rutabaga_aio_data *data;
> + VirtIOGPU *g = (VirtIOGPU *)user_data;
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + /*
> + * Both gfxstream and cross-domain (and even newer versions of
> + * virglrenderer: see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal
> + * fence completion on threads ("callback threads") that are different
> + * from the thread that processes the command queue ("main thread").
> + *
> + * crosvm and other virtio-gpu 1.1 implementations enable callback
> + * threads via locking. However, on QEMU a deadlock is observed if
> + * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback]
> + * is used from a thread that is not the main thread.
> + *
> + * The reason is that QEMU's internal locking is designed to work with
> + * QEMU threads (see rcu_register_thread()) and not generic C/C++/Rust
> + * threads. For now, we can work around this by scheduling the return
> + * of the fence descriptors on the main thread.
> + */
> +
> + data = g_new0(struct rutabaga_aio_data, 1);
> + data->vr = vr;
> + data->fence = fence_data;
> + aio_bh_schedule_oneshot_full(vr->ctx, virtio_gpu_rutabaga_aio_cb,
> + (void *)data, "aio");
> +}
> +
> +static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
> +{
> + int result;
> + uint64_t capset_mask;
> + struct rutabaga_channels channels = { 0 };
> + struct rutabaga_builder builder = { 0 };
> +
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + vr->rutabaga = NULL;
> +
> + if (!vr->capset_names) {
> + return -EINVAL;
> + }
> +
> + builder.wsi = RUTABAGA_WSI_SURFACELESS;
> + /*
> + * Currently, if WSI is specified, the only valid strings are
> + * "surfaceless" or "headless". Surfaceless doesn't create a native
> + * window surface, but does copy from the render target to the Pixman
> + * buffer if a virtio-gpu 2D hypercall is issued. Surfaceless is the
> + * default.
> + *
> + * Headless is like surfaceless, but doesn't copy to the Pixman
> + * buffer. The use case is automated testing environments where there
> + * is no need to view results.
> + *
> + * In the future, more performant virtio-gpu 2D UI integration may be
> + * added.
> + */
> + if (vr->wsi) {
> + if (!strcmp(vr->wsi, "surfaceless")) {
>
g_str_equal() is a bit more readable
> + vr->headless = false;
> + } else if (!strcmp(vr->wsi, "headless")) {
> + vr->headless = true;
> + } else {
> + return -EINVAL;
> + }
> + }
> +
> + result = rutabaga_calculate_capset_mask(vr->capset_names,
> &capset_mask);
> + if (result) {
> + return result;
> + }
> +
> + /*
> + * rutabaga-0.1.1 is only compiled/tested with gfxstream and
> cross-domain
> + * support. Future versions may change this to have more context
> types if
> + * there is any interest.
> + */
> + if (capset_mask & (BIT(RUTABAGA_CAPSET_VIRGL) |
> + BIT(RUTABAGA_CAPSET_VIRGL2) |
> + BIT(RUTABAGA_CAPSET_VENUS) |
> + BIT(RUTABAGA_CAPSET_DRM))) {
> + return -EINVAL;
> + }
> +
> + builder.user_data = (uint64_t)(uintptr_t *)(void *)g;
>
GPOINTER_TO_UINT(g) ?
> + builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
> + builder.capset_mask = capset_mask;
> +
> + if (vr->wayland_socket_path) {
> + if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))
> == 0) {
> + return -EINVAL;
> + }
> +
> + channels.channels =
> + (struct rutabaga_channel *)calloc(1, sizeof(struct
> rutabaga_channel));
>
g_new0(struct rutabaga_channel, 1)
> + channels.num_channels = 1;
> + channels.channels[0].channel_name = vr->wayland_socket_path;
> + channels.channels[0].channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
> + builder.channels = &channels;
> + }
> +
> + result = rutabaga_init(&builder, &vr->rutabaga);
> + if (builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) {
> + free(channels.channels);
>
g_free() (after switching to g_new)
> + }
> +
> + memset(&memory_regions, 0, MAX_SLOTS * sizeof(struct
> MemoryRegionInfo));
> + vr->ctx = qemu_get_aio_context();
> + return result;
> +}
> +
> +static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
> +{
> + int result;
> + uint32_t num_capsets;
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> +
> + if (!vr->rutabaga_active) {
> + result = virtio_gpu_rutabaga_init(g);
> + if (result) {
> + error_report("Failed to init rutabaga");
> + return 0;
> + }
> +
> + vr->rutabaga_active = true;
> + }
> +
> + result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
> + if (result) {
> + error_report("Failed to get capsets");
> + return 0;
> + }
> + vr->num_capsets = num_capsets;
> + return num_capsets;
> +}
> +
> +static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue
> *vq)
> +{
> + VirtIOGPU *g = VIRTIO_GPU(vdev);
> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
> + struct virtio_gpu_ctrl_command *cmd;
> +
> + if (!virtio_queue_ready(vq)) {
> + return;
> + }
> +
> + if (!vr->rutabaga_active) {
> + int result = virtio_gpu_rutabaga_init(g);
> + if (!result) {
> + vr->rutabaga_active = true;
> + }
> + }
> +
> + if (!vr->rutabaga_active) {
> + return;
> + }
> +
> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> + while (cmd) {
> + cmd->vq = vq;
> + cmd->error = 0;
> + cmd->finished = false;
> + QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
> + }
> +
> + virtio_gpu_process_cmdq(g);
> +}
> +
> +static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
> +{
> + int num_capsets;
> + VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
> + VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
> +
>
It would be simpler to call virtio_gpu_rutabaga_init() here, with Error
argument etc, instead of indirectly from other places.
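
Perhaps something along these lines (a rough sketch only; the error
message and the exact check are my assumptions, untested):

    static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
    {
        VirtIOGPU *gpudev = VIRTIO_GPU(qdev);

        if (virtio_gpu_rutabaga_init(gpudev)) {
            error_setg(errp, "failed to initialize rutabaga");
            return;
        }
        /* ... then query capsets, set the feature flags and realize as below */
    }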
> + num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
> + if (!num_capsets) {
> + return;
> + }
> +
> +#if HOST_BIG_ENDIAN
> + error_setg(errp, "rutabaga is not supported on bigendian platforms");
> + return;
> +#endif
> +
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
> +
> + bdev->virtio_config.num_capsets = num_capsets;
> + virtio_gpu_device_realize(qdev, errp);
> +}
> +
> +static Property virtio_gpu_rutabaga_properties[] = {
> + DEFINE_PROP_STRING("capset_names", VirtioGpuRutabaga, capset_names),
> + DEFINE_PROP_STRING("wayland_socket_path", VirtioGpuRutabaga,
> + wayland_socket_path),
> + DEFINE_PROP_STRING("wsi", VirtioGpuRutabaga, wsi),
> + DEFINE_PROP_END_OF_LIST(),
> +};
> +
> +static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
> +{
> + DeviceClass *dc = DEVICE_CLASS(klass);
> + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
> + VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
> + VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
> +
> + vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
> + vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
> + vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
> + vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
> +
> + vdc->realize = virtio_gpu_rutabaga_realize;
> + device_class_set_props(dc, virtio_gpu_rutabaga_properties);
> +}
> +
> +static const TypeInfo virtio_gpu_rutabaga_info = {
> + .name = TYPE_VIRTIO_GPU_RUTABAGA,
> + .parent = TYPE_VIRTIO_GPU,
> + .instance_size = sizeof(VirtioGpuRutabaga),
> + .class_init = virtio_gpu_rutabaga_class_init,
> +};
> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
> +module_kconfig(VIRTIO_GPU);
> +
> +static void virtio_register_types(void)
> +{
> + type_register_static(&virtio_gpu_rutabaga_info);
> +}
> +
> +type_init(virtio_register_types)
> +
> +module_dep("hw-display-virtio-gpu");
> diff --git a/hw/display/virtio-vga-rutabaga.c
> b/hw/display/virtio-vga-rutabaga.c
> new file mode 100644
> index 0000000000..01831bd03f
> --- /dev/null
> +++ b/hw/display/virtio-vga-rutabaga.c
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include "qemu/osdep.h"
> +#include "hw/pci/pci.h"
> +#include "hw/qdev-properties.h"
> +#include "hw/virtio/virtio-gpu.h"
> +#include "hw/display/vga.h"
> +#include "qapi/error.h"
> +#include "qemu/module.h"
> +#include "virtio-vga.h"
> +#include "qom/object.h"
> +
> +#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
> +
> +typedef struct VirtIOVGARUTABAGA VirtIOVGARUTABAGA;
> +DECLARE_INSTANCE_CHECKER(VirtIOVGARUTABAGA, VIRTIO_VGA_RUTABAGA,
> + TYPE_VIRTIO_VGA_RUTABAGA)
> +
> +struct VirtIOVGARUTABAGA {
> + VirtIOVGABase parent_obj;
> +
> + VirtioGpuRutabaga vdev;
> +};
> +
> +static void virtio_vga_rutabaga_inst_initfn(Object *obj)
> +{
> + VirtIOVGARUTABAGA *dev = VIRTIO_VGA_RUTABAGA(obj);
> +
> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
> + TYPE_VIRTIO_GPU_RUTABAGA);
> + VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
> +}
> +
> +static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
> + .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
> + .parent = TYPE_VIRTIO_VGA_BASE,
> + .instance_size = sizeof(VirtIOVGARUTABAGA),
> + .instance_init = virtio_vga_rutabaga_inst_initfn,
> +};
> +module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
> +module_kconfig(VIRTIO_VGA);
> +
> +static void virtio_vga_register_types(void)
> +{
> + if (have_vga) {
> + virtio_pci_types_register(&virtio_vga_rutabaga_info);
> + }
> +}
> +
> +type_init(virtio_vga_register_types)
> +
> +module_dep("hw-display-virtio-vga");
> --
> 2.41.0.255.g8b1d071c50-goog
>
>
>
--
Marc-André Lureau
[-- Attachment #2: Type: text/html, Size: 56071 bytes --]
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation
2023-07-11 2:56 ` [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
@ 2023-07-12 21:40 ` Akihiko Odaki
2023-07-13 1:28 ` Gurchetan Singh
0 siblings, 1 reply; 22+ messages in thread
From: Akihiko Odaki @ 2023-07-12 21:40 UTC (permalink / raw)
To: Gurchetan Singh, qemu-devel
Cc: kraxel, marcandre.lureau, dmitry.osipenko, ray.huang, alex.bennee,
shentey
On 2023/07/11 11:56, Gurchetan Singh wrote:
> This adds basic documentation for virtio-gpu.
Thank you for adding documentation for other backends too. I have been
asked how virtio-gpu works so many times and always had to explain by
myself though Gerd does have a nice article.* This documentation will help.
* https://www.kraxel.org/blog/2021/05/virtio-gpu-qemu-graphics-update/
>
> Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> ---
> docs/system/device-emulation.rst | 1 +
> docs/system/devices/virtio-gpu.rst | 80 ++++++++++++++++++++++++++++++
> 2 files changed, 81 insertions(+)
> create mode 100644 docs/system/devices/virtio-gpu.rst
>
> diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
> index 4491c4cbf7..1167f3a9f2 100644
> --- a/docs/system/device-emulation.rst
> +++ b/docs/system/device-emulation.rst
> @@ -91,6 +91,7 @@ Emulated Devices
> devices/nvme.rst
> devices/usb.rst
> devices/vhost-user.rst
> + devices/virtio-gpu.rst
> devices/virtio-pmem.rst
> devices/vhost-user-rng.rst
> devices/canokey.rst
> diff --git a/docs/system/devices/virtio-gpu.rst b/docs/system/devices/virtio-gpu.rst
> new file mode 100644
> index 0000000000..2426039540
> --- /dev/null
> +++ b/docs/system/devices/virtio-gpu.rst
> @@ -0,0 +1,80 @@
> +..
> + SPDX-License-Identifier: GPL-2.0
> +
> +virtio-gpu
> +==========
> +
> +This document explains the setup and usage of the virtio-gpu device.
> +The virtio-gpu device paravirtualizes the GPU and display controller.
> +
> +Linux kernel support
> +--------------------
> +
> +virtio-gpu requires a guest Linux kernel built with the
> +``CONFIG_DRM_VIRTIO_GPU`` option.
> +
> +QEMU virtio-gpu variants
> +------------------------
> +
> +There are many virtio-gpu device variants, listed below:
> +
> + * ``virtio-vga``
> + * ``virtio-gpu-pci``
> + * ``virtio-vga-gl``
> + * ``virtio-gpu-gl-pci``
> + * ``virtio-vga-rutabaga``
> + * ``virtio-gpu-rutabaga-pci``
> + * ``vhost-user-vga``
> + * ``vhost-user-gl-pci``
> +
> +QEMU provides a 2D virtio-gpu backend, and two accelerated backends:
> +virglrenderer ('gl' device label) and rutabaga_gfx ('rutabaga' device
> +label). There is also a vhost-user backend that runs the 2D device
> +in a separate process. Each device type has a VGA or PCI variant. This
> +document uses the PCI variant in examples.
I suggest to replace "2D device" with "graphics stack"; vhost-user works
with 3D too. It's also slightly awkward to say a device runs in a
separate process as some portion of device emulation always stuck in
QEMU. In my opinion, the point of vhost-user backend is to isolate the
gigantic graphics stack so let's put this phrase.
I also have a slightly different understanding regarding virtio-gpu variants.
First, the variants can be classified into VGA and non-VGA ones. The VGA
ones are prefixed with virtio-vga or vhost-user-vga while the non-VGA
ones are prefixed with virtio-gpu or vhost-user-gpu.
The VGA ones always use PCI interface, but for the non-VGA ones, you can
further pick simple MMIO or PCI. For MMIO, you can suffix the device
name with -device though vhost-user-gpu apparently does not support
MMIO. For PCI, you can suffix it with -pci. Without these suffixes, the
platform default will be chosen.
Since enumerating all variants will result in a long list, you may
provide abstract syntaxes like the following for this explanation:
* virtio-vga[-BACKEND]
* virtio-gpu[-BACKEND][-INTERFACE]
* vhost-user-vga
* vhost-user-pci
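
For example, those syntaxes expand to concrete names such as (a sketch,
not exhaustive):

    virtio-vga-gl              <- VGA, virglrenderer backend (PCI only)
    virtio-gpu-rutabaga-pci    <- non-VGA, rutabaga backend, PCI
    virtio-gpu-device          <- non-VGA, 2D backend, MMIO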
> +
> +virtio-gpu 2d
> +-------------
> +
> +The default 2D mode uses a guest software renderer (llvmpipe, lavapipe,
> +Swiftshader) to provide the OpenGL/Vulkan implementations.
It's certainly possible to use virtio-gpu without software
OpenGL/Vulkan. A major example is Windows; its software renderer is
somewhat limited in my understanding.
My suggestion:
The default 2D backend only performs 2D operations. The guest needs to
employ a software renderer for 3D graphics.
It's also better to provide links for the renderers. Apparently lavapipe
does not have a dedicated documentation, so you may add a link for Mesa
and mention them like:
LLVMpipe and Lavapipe included in `Mesa`_, or `SwiftShader`_
And I think it will be helpful to say LLVMpipe and Lavapipe work out of
the box on typical modern Linux distributions, as that should be what
people care about.
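
Concretely, the rst could define the link targets like this (the URLs
are my assumption of the canonical ones):

    .. _Mesa: https://mesa3d.org/
    .. _SwiftShader: https://github.com/google/swiftshader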
> +
> +.. parsed-literal::
> + -device virtio-gpu-pci
> +
> +virtio-gpu virglrenderer
> +------------------------
> +
> +When using virgl accelerated graphics mode, OpenGL API calls are translated
> +into an intermediate representation (see `Gallium3D`_). The intermediate
> +representation is communicated to the host and the `virglrenderer`_ library
> +on the host translates the intermediate representation back to OpenGL API
> +calls.
It should be mentioned that the translation occurs on the guest side,
and the guest-side component is included in Linux distributions just as
LLVMpipe and Lavapipe are.
> +
> +.. parsed-literal::
> + -device virtio-gpu-gl-pci
> +
> +.. _Gallium3D: https://www.freedesktop.org/wiki/Software/gallium/
> +.. _virglrenderer: https://gitlab.freedesktop.org/virgl/virglrenderer/
> +
> +virtio-gpu rutabaga
> +-------------------
> +
> +virtio-gpu can also leverage `rutabaga_gfx`_ to provide `gfxstream`_ rendering
> +and `Wayland display passthrough`_. With the gfxstream rendering mode, GLES
> +and Vulkan calls are forwarded directly to the host with minimal modification.
I find the description included in the PDF you posted on GitLab* quite
useful, so I suggest incorporating its content.
You may omit the overall design diagram as it mentions guest side and
Rutabaga details and crosvm and may be confusing for QEMU users.
The detailed commands for building dependencies may also be omitted and
instead point to the documentation of respective projects as they should
be subject to future changes.
It's unfortunate that rutabaga_gfx and goldfish-opengl do not come with
proper documentation (and I wonder if rutabaga_gfx still needs the hack
mentioned in the PDF). For now, the procedure to build them should be
included in the documentation, since it will otherwise take a first-time
reader hours to figure out.
*
https://gitlab.com/qemu-project/qemu/uploads/f960580bf0f19077e0330960b4a3152e/gfxstream_+_QEMU_setup__public_.pdf
> +
> +Please refer to the `crosvm book`_ on how to set up the guest for Wayland
> +passthrough (QEMU uses the same implementation).
> +
> +This device does require host blob support (``hostmem`` field below), but not
> +all capsets (``capset_names`` below) have to be enabled when starting the device.
> +
> +.. parsed-literal::
> + -device virtio-gpu-rutabaga-pci,capset_names=gfxstream-vulkan:cross-domain,\\
> + hostmem=8G,wayland_socket_path="$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY"
> +
> +.. _rutabaga_gfx: https://github.com/google/crosvm/blob/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h
> +.. _gfxstream: https://android.googlesource.com/platform/hardware/google/gfxstream/
> +.. _Wayland display passthrough: https://www.youtube.com/watch?v=OZJiHMtIQ2M
> +.. _crosvm book: https://crosvm.dev/book/devices/wayland.html
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream
2023-07-12 19:14 ` Marc-André Lureau
@ 2023-07-13 1:27 ` Gurchetan Singh
0 siblings, 0 replies; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-13 1:27 UTC (permalink / raw)
To: Marc-André Lureau
Cc: qemu-devel, akihiko.odaki, dmitry.osipenko, ray.huang,
alex.bennee, shentey, Gerd Hoffmann
On Wed, Jul 12, 2023 at 12:15 PM Marc-André Lureau
<marcandre.lureau@gmail.com> wrote:
>
> Hi
>
> On Tue, Jul 11, 2023 at 6:57 AM Gurchetan Singh <gurchetansingh@chromium.org> wrote:
>>
>> This adds initial support for gfxstream and cross-domain. Both
>> features rely on virtio-gpu blob resources and context types, which
>> are also implemented in this patch.
>>
>> gfxstream has a long and illustrious history in Android graphics
>> paravirtualization. It has been powering graphics in the Android
>> Studio Emulator for more than a decade, which is the main developer
>> platform.
>>
>> Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>> The key design characteristic was a 1:1 threading model and
>> auto-generation, which fit nicely with the OpenGLES spec. It also
>> allowed easy layering with ANGLE on the host, which provides the GLES
>> implementations on Windows or macOS environments.
>>
>> gfxstream has traditionally been maintained by a single engineer, and
>> between 2015 to 2021, the goldfish throne passed to Frank Yang.
>> Historians often remark this glorious reign ("pax gfxstreama" is the
>> academic term) was comparable to that of Augustus and both Queen
>> Elizabeths. Just to name a few accomplishments in a resplendent
>> panoply: higher versions of GLES, address space graphics, snapshot
>> support and CTS compliant Vulkan [b].
>>
>> One major drawback was the use of out-of-tree goldfish drivers.
>> Android engineers didn't know much about DRM/KMS and especially TTM so
>> a simple guest to host pipe was conceived.
>>
>> Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>> the Mesa/virglrenderer communities. In 2018, the initial virtio-gpu
>> port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>> It was a symbol compatible replacement of virglrenderer [c] and named
>> "AVDVirglrenderer". This implementation forms the basis of the
>> current gfxstream host implementation still in use today.
>>
>> cross-domain support follows a similar arc. Originally conceived by
>> Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>> 2018, it initially relied on the downstream "virtio-wl" device.
>>
>> In 2020 and 2021, virtio-gpu was extended to include blob resources
>> and multiple timelines by yours truly, features gfxstream/cross-domain
>> both require to function correctly.
>>
>> Right now, we stand at the precipice of a truly fantastic possibility:
>> the Android Emulator powered by upstream QEMU and upstream Linux
>> kernel. gfxstream will then be packaged properly, and app
>> developers can even fix gfxstream bugs on their own if they encounter
>> them.
>>
>> It's been quite the ride, my friends. Where will gfxstream head next,
>> nobody really knows. I wouldn't be surprised if it's around for
>> another decade, maintained by a new generation of Android graphics
>> enthusiasts.
>>
>> Technical details:
>> - Very simple initial display integration: just used Pixman
>> - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
>> calls
>>
>
> Wow, this is not for the faint reader.. there is a lot to grasp in this gfx space...
>
> Could you perhaps extend on what this current code can do for an average Linux VM? or for some Android VM (which one?!), and then what are the next steps and status?
- For Linux VMs + Linux hosts, this provides more modern display
virtualization via Wayland passthrough. It is also a performance
benefit, since you can avoid a guest compositor pass. For widespread
distribution, someone needs to package Sommelier or
wayland-proxy-virtwl [a] Linux-distro style. In addition, newer
versions [b] of the Linux kernel come with DRM_VIRTIO_GPU_KMS, which
allows disabling KMS hypercalls. I suppose someone could come up with a
Linux VM variant that automatically starts Sommelier or
wayland-proxy-virtwl and some terminal app.
- For Android VMs, you can boot with gfxstream GLES/Vulkan now with
upstream QEMU with a simple UI. The next step would be improving
display integration and UI interfaces with the goal of the QEMU
upstream graphics being in an emulator release [c].
Will add these details to the commit message in v2.
[a] https://github.com/talex5/wayland-proxy-virtwl
[b] https://lore.kernel.org/lkml/20230302233506.3146290-1-robdclark@gmail.com/
[c] https://developer.android.com/studio/releases/emulator
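
For reference, a Wayland-passthrough launch would then look roughly like
the example already in the docs patch (the machine arguments, hostmem
size and socket path below are placeholders):

    qemu-system-x86_64 ... -device virtio-gpu-rutabaga-pci,capset_names=cross-domain,hostmem=4G,wayland_socket_path="$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY"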
>
> My limited understanding (from this series and from https://gitlab.com/qemu-project/qemu/-/issues/1611) is that it allows passing-through some vulkan APIs for off-screen usage. Is that accurate?
For Linux VMs, it's currently offscreen accelerated rendering only.
For Android VMs, on-screen does work, but for simplicity a memcpy does
occur when flushing to the scanout.
>
> How far are we from getting upstream QEMU to be used by Android Emulator? (in the gfx domain at least) What would it take to get the average Linux VM to use virtio-vga-rutabaga instead of virtio-vga-gl to get accelerated rendering?
We have both offscreen (running automated CTS tests in headless
environments) and on-screen Android emulator use cases. For
offscreen, this patch series can be used out-of-the-box. For onscreen
it works but there's that memcpy. Fixing that will be the next step
after this patch series.
For Linux VMs, we do plan to add Wayland WSI to gfxstream Vulkan for
testing/debug/performance purposes. And we can always compile
rutabaga_gfx's virglrenderer bindings (the current patchset doesn't do
this for simplicity + lack of testing cycles) if there's sufficient
interest in Linux VMs + accelerated rendering + virtio-vga-rutabaga.
>
>> [a] https://android-review.googlesource.com/c/platform/development/+/34470
>> [b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>> [c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>>
>> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>> ---
>> v2: Incorported various suggestions by Akihiko Odaki and Bernard Berschow
>> - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
>> - Used error_report(..)
>> - Used g_autofree to fix leaks on error paths
>> - Removed unnecessary casts
>> - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>>
>> hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
>> hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
>> hw/display/virtio-vga-rutabaga.c | 52 ++
>> 3 files changed, 1188 insertions(+)
>> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
>> create mode 100644 hw/display/virtio-gpu-rutabaga.c
>> create mode 100644 hw/display/virtio-vga-rutabaga.c
>>
>> diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
>> new file mode 100644
>> index 0000000000..5765bef266
>> --- /dev/null
>> +++ b/hw/display/virtio-gpu-pci-rutabaga.c
>> @@ -0,0 +1,48 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include "qemu/osdep.h"
>> +#include "qapi/error.h"
>> +#include "qemu/module.h"
>> +#include "hw/pci/pci.h"
>> +#include "hw/qdev-properties.h"
>> +#include "hw/virtio/virtio.h"
>> +#include "hw/virtio/virtio-bus.h"
>> +#include "hw/virtio/virtio-gpu-pci.h"
>> +#include "qom/object.h"
>> +
>> +#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>> +typedef struct VirtIOGPURUTABAGAPCI VirtIOGPURUTABAGAPCI;
>> +DECLARE_INSTANCE_CHECKER(VirtIOGPURUTABAGAPCI, VIRTIO_GPU_RUTABAGA_PCI,
>> + TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>> +
>> +struct VirtIOGPURUTABAGAPCI {
>> + VirtIOGPUPCIBase parent_obj;
>> + VirtioGpuRutabaga vdev;
>> +};
>> +
>> +static void virtio_gpu_rutabaga_initfn(Object *obj)
>> +{
>> + VirtIOGPURUTABAGAPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>> +
>> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> + TYPE_VIRTIO_GPU_RUTABAGA);
>> + VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> +}
>> +
>> +static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>> + .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>> + .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>> + .instance_size = sizeof(VirtIOGPURUTABAGAPCI),
>> + .instance_init = virtio_gpu_rutabaga_initfn,
>> +};
>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>> +module_kconfig(VIRTIO_PCI);
>> +
>> +static void virtio_gpu_rutabaga_pci_register_types(void)
>> +{
>> + virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>> +}
>> +
>> +type_init(virtio_gpu_rutabaga_pci_register_types)
>> +
>> +module_dep("hw-display-virtio-gpu-pci");
>> diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
>> new file mode 100644
>> index 0000000000..b60a30a093
>> --- /dev/null
>> +++ b/hw/display/virtio-gpu-rutabaga.c
>> @@ -0,0 +1,1088 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include "qemu/osdep.h"
>> +#include "qemu/error-report.h"
>> +#include "qemu/iov.h"
>> +#include "trace.h"
>> +#include "hw/virtio/virtio.h"
>> +#include "hw/virtio/virtio-gpu.h"
>> +#include "hw/virtio/virtio-gpu-pixman.h"
>> +#include "hw/virtio/virtio-iommu.h"
>> +
>> +#include <glib.h>
>> +#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>> +
>> +#define CHECK(condition, cmd) \
>> + do { \
>> + if (!(condition)) { \
>> + error_report("CHECK failed in %s() %s:%d", __func__, \
>> + __FILE__, __LINE__); \
>> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
>> + return; \
>> + } \
>> + } while (0)
>> +
>> +#define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
>> +
>> +#define MAX_SLOTS 4096
>> +
>> +struct MemoryRegionInfo {
>> + int used;
>> + MemoryRegion mr;
>> + uint32_t resource_id;
>> +};
>> +
>> +static struct MemoryRegionInfo memory_regions[MAX_SLOTS];
>> +
>> +struct rutabaga_aio_data {
>> + struct VirtioGpuRutabaga *vr;
>> + struct rutabaga_fence fence;
>> +};
>> +
>> +static void
>> +virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
>> + uint32_t resource_id)
>> +{
>> + struct virtio_gpu_simple_resource *res;
>> + struct rutabaga_transfer transfer = { 0 };
>> + struct iovec transfer_iovec;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + res = virtio_gpu_find_resource(g, resource_id);
>> + if (!res) {
>> + return;
>> + }
>> +
>> + if (res->width != s->current_cursor->width ||
>> + res->height != s->current_cursor->height) {
>> + return;
>> + }
>> +
>> + transfer.x = 0;
>> + transfer.y = 0;
>> + transfer.z = 0;
>> + transfer.w = res->width;
>> + transfer.h = res->height;
>> + transfer.d = 1;
>> +
>> + transfer_iovec.iov_base = (void *)s->current_cursor->data;
>> + transfer_iovec.iov_len = res->width * res->height * 4;
>> +
>> + rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> + resource_id, &transfer,
>> + &transfer_iovec);
>> +}
>> +
>> +static void
>> +virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>> +{
>> + VirtIOGPU *g = VIRTIO_GPU(b);
>> + virtio_gpu_process_cmdq(g);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct rutabaga_create_3d rc_3d = { 0 };
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_create_2d c2d;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(c2d);
>> + trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>> + c2d.width, c2d.height);
>> +
>> + rc_3d.target = 2;
>> + rc_3d.format = c2d.format;
>> + rc_3d.bind = (1 << 1);
>> + rc_3d.width = c2d.width;
>> + rc_3d.height = c2d.height;
>> + rc_3d.depth = 1;
>> + rc_3d.array_size = 1;
>> + rc_3d.last_level = 0;
>> + rc_3d.nr_samples = 0;
>> + rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>> +
>> + result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
>> + CHECK_RESULT(result, cmd);
>> +
>> + res = g_new0(struct virtio_gpu_simple_resource, 1);
>> + res->width = c2d.width;
>> + res->height = c2d.height;
>> + res->format = c2d.format;
>> + res->resource_id = c2d.resource_id;
>> +
>> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct rutabaga_create_3d rc_3d = { 0 };
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_create_3d c3d;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(c3d);
>> +
>> + trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>> + c3d.width, c3d.height, c3d.depth);
>> +
>> + rc_3d.target = c3d.target;
>> + rc_3d.format = c3d.format;
>> + rc_3d.bind = c3d.bind;
>> + rc_3d.width = c3d.width;
>> + rc_3d.height = c3d.height;
>> + rc_3d.depth = c3d.depth;
>> + rc_3d.array_size = c3d.array_size;
>> + rc_3d.last_level = c3d.last_level;
>> + rc_3d.nr_samples = c3d.nr_samples;
>> + rc_3d.flags = c3d.flags;
>> +
>> + result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
>> + CHECK_RESULT(result, cmd);
>> +
>> + res = g_new0(struct virtio_gpu_simple_resource, 1);
>> + res->width = c3d.width;
>> + res->height = c3d.height;
>> + res->format = c3d.format;
>> + res->resource_id = c3d.resource_id;
>> +
>> + QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_resource_unref(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_unref unref;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(unref);
>> +
>> + trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>> +
>> + res = virtio_gpu_find_resource(g, unref.resource_id);
>> + CHECK(res, cmd);
>> +
>> + result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>> + CHECK_RESULT(result, cmd);
>> +
>> + if (res->image) {
>> + pixman_image_unref(res->image);
>> + }
>> +
>> + QTAILQ_REMOVE(&g->reslist, res, next);
>> + g_free(res);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_context_create(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_ctx_create cc;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(cc);
>> + trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>> + cc.debug_name);
>> +
>> + result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>> + cc.context_init, cc.debug_name, cc.nlen);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_context_destroy(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_ctx_destroy cd;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(cd);
>> + trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>> +
>> + result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result, i;
>> + struct virtio_gpu_scanout *scanout = NULL;
>> + struct virtio_gpu_simple_resource *res;
>> + struct rutabaga_transfer transfer = { 0 };
>> + struct iovec transfer_iovec;
>> + struct virtio_gpu_resource_flush rf;
>> + bool found = false;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> + if (vr->headless) {
>> + return;
>> + }
>> +
>> + VIRTIO_GPU_FILL_CMD(rf);
>> + trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>> + rf.r.width, rf.r.height, rf.r.x, rf.r.y);
>> +
>> + res = virtio_gpu_find_resource(g, rf.resource_id);
>> + CHECK(res, cmd);
>> +
>> + for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>> + scanout = &g->parent_obj.scanout[i];
>> + if (res->scanout_bitmask & (1 << i)) {
>> + found = true;
>> + break;
>> + }
>> + }
>> +
>> + if (!found) {
>> + return;
>> + }
>> +
>> + transfer.x = 0;
>> + transfer.y = 0;
>> + transfer.z = 0;
>> + transfer.w = res->width;
>> + transfer.h = res->height;
>> + transfer.d = 1;
>> +
>> + transfer_iovec.iov_base = (void *)pixman_image_get_data(res->image);
>> + transfer_iovec.iov_len = res->width * res->height * 4;
>> +
>> + result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>> + rf.resource_id, &transfer,
>> + &transfer_iovec);
>> + CHECK_RESULT(result, cmd);
>> + dpy_gfx_update_full(scanout->con);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_scanout *scanout = NULL;
>> + struct virtio_gpu_set_scanout ss;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> + if (vr->headless) {
>> + return;
>> + }
>> +
>> + VIRTIO_GPU_FILL_CMD(ss);
>> + trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>> + ss.r.width, ss.r.height, ss.r.x, ss.r.y);
>> +
>> + scanout = &g->parent_obj.scanout[ss.scanout_id];
>> + g->parent_obj.enable = 1;
>> +
>> + if (ss.resource_id == 0) {
>> + return;
>> + }
>> +
>> + res = virtio_gpu_find_resource(g, ss.resource_id);
>> + CHECK(res, cmd);
>> +
>> + if (!res->image) {
>> + pixman_format_code_t pformat;
>> + pformat = virtio_gpu_get_pixman_format(res->format);
>> + CHECK(pformat, cmd);
>> +
>> + res->image = pixman_image_create_bits(pformat,
>> + res->width,
>> + res->height,
>> + NULL, 0);
>> + CHECK(res->image, cmd);
>> + pixman_image_ref(res->image);
>> + }
>> +
>> + /* realloc the surface ptr */
>> + scanout->ds = qemu_create_displaysurface_pixman(res->image);
>> + dpy_gfx_replace_surface(scanout->con, NULL);
>> + dpy_gfx_replace_surface(scanout->con, scanout->ds);
>> + res->scanout_bitmask |= (1 << ss.scanout_id);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_submit_3d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_cmd_submit cs;
>> + g_autofree uint8_t *buf = NULL;
>> + size_t s;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(cs);
>> + trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>> +
>> + buf = g_new0(uint8_t, cs.size);
>> + s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>> + sizeof(cs), buf, cs.size);
>> + CHECK((s == cs.size), cmd);
>> +
>> + result = rutabaga_submit_command(vr->rutabaga, cs.hdr.ctx_id, buf, cs.size);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct rutabaga_transfer transfer = { 0 };
>> + struct virtio_gpu_transfer_to_host_2d t2d;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(t2d);
>> + trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>> +
>> + transfer.x = t2d.r.x;
>> + transfer.y = t2d.r.y;
>> + transfer.z = 0;
>> + transfer.w = t2d.r.width;
>> + transfer.h = t2d.r.height;
>> + transfer.d = 1;
>> +
>> + result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
>> + &transfer);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct rutabaga_transfer transfer = { 0 };
>> + struct virtio_gpu_transfer_host_3d t3d;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(t3d);
>> + trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>> +
>> + transfer.x = t3d.box.x;
>> + transfer.y = t3d.box.y;
>> + transfer.z = t3d.box.z;
>> + transfer.w = t3d.box.w;
>> + transfer.h = t3d.box.h;
>> + transfer.d = t3d.box.d;
>> + transfer.level = t3d.level;
>> + transfer.stride = t3d.stride;
>> + transfer.layer_stride = t3d.layer_stride;
>> + transfer.offset = t3d.offset;
>> +
>> + result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
>> + t3d.resource_id, &transfer);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct rutabaga_transfer transfer = { 0 };
>> + struct virtio_gpu_transfer_host_3d t3d;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(t3d);
>> + trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>> +
>> + transfer.x = t3d.box.x;
>> + transfer.y = t3d.box.y;
>> + transfer.z = t3d.box.z;
>> + transfer.w = t3d.box.w;
>> + transfer.h = t3d.box.h;
>> + transfer.d = t3d.box.d;
>> + transfer.level = t3d.level;
>> + transfer.stride = t3d.stride;
>> + transfer.layer_stride = t3d.layer_stride;
>> + transfer.offset = t3d.offset;
>> +
>> + result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
>> + t3d.resource_id, &transfer, NULL);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + struct rutabaga_iovecs vecs = { 0 };
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_attach_backing att_rb;
>> + struct iovec *res_iovs;
>> + uint32_t res_niov;
>> + int ret;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(att_rb);
>> + trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>> +
>> + res = virtio_gpu_find_resource(g, att_rb.resource_id);
>> + CHECK(res, cmd);
>> + CHECK(!res->iov, cmd);
>> +
>> + ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
>> + cmd, NULL, &res_iovs, &res_niov);
>> + CHECK_RESULT(ret, cmd);
>> +
>> + vecs.iovecs = res_iovs;
>> + vecs.num_iovecs = res_niov;
>> +
>> + ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
>> + &vecs);
>> + if (ret != 0) {
>> + virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
>> + }
>> +}
>> +
>> +static void
>> +rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_detach_backing detach_rb;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(detach_rb);
>> + trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>> +
>> + res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>> + CHECK(res, cmd);
>> +
>> + rutabaga_resource_detach_backing(vr->rutabaga,
>> + detach_rb.resource_id);
>> +
>> + virtio_gpu_cleanup_mapping(g, res);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_ctx_resource att_res;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(att_res);
>> + trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>> + att_res.resource_id);
>> +
>> + result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
>> + att_res.resource_id);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_ctx_resource det_res;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(det_res);
>> + trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>> + det_res.resource_id);
>> +
>> + result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
>> + det_res.resource_id);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_get_capset_info info;
>> + struct virtio_gpu_resp_capset_info resp;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(info);
>> +
>> + result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>> + &resp.capset_id, &resp.capset_max_version,
>> + &resp.capset_max_size);
>> + CHECK_RESULT(result, cmd);
>> +
>> + resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> +}
>> +
>> +static void
>> +rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + struct virtio_gpu_get_capset gc;
>> + struct virtio_gpu_resp_capset *resp;
>> + uint32_t capset_size;
>> + uint32_t current_id;
>> + bool found = false;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(gc);
>> + for (uint32_t i = 0; i < vr->num_capsets; i++) {
>> + result = rutabaga_get_capset_info(vr->rutabaga, i,
>> + &current_id, &capset_size,
>> + &capset_size);
>> + CHECK_RESULT(result, cmd);
>> +
>> + if (current_id == gc.capset_id) {
>> + found = true;
>> + break;
>> + }
>> + }
>> +
>> + if (!found) {
>> + error_report("capset not found!");
>> + return;
>> + }
>> +
>> + resp = g_malloc0(sizeof(*resp) + capset_size);
>> + resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>> + rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>> + (uint8_t *)resp->capset_data, capset_size);
>> +
>> + virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
>> + g_free(resp);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int result;
>> + struct rutabaga_iovecs vecs = { 0 };
>> + g_autofree struct virtio_gpu_simple_resource *res = NULL;
>> + struct virtio_gpu_simple_resource *resource;
>> + struct virtio_gpu_resource_create_blob cblob;
>> + struct rutabaga_create_blob rc_blob = { 0 };
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(cblob);
>> + trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>> +
>> + CHECK(cblob.resource_id != 0, cmd);
>> +
>> + res = g_new0(struct virtio_gpu_simple_resource, 1);
>> +
>> + res->resource_id = cblob.resource_id;
>> + res->blob_size = cblob.size;
>> +
>> + if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>> + result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>> + sizeof(cblob), cmd, &res->addrs,
>> + &res->iov, &res->iov_cnt);
>> + CHECK_RESULT(result, cmd);
>> + }
>> +
>> + rc_blob.blob_id = cblob.blob_id;
>> + rc_blob.blob_mem = cblob.blob_mem;
>> + rc_blob.blob_flags = cblob.blob_flags;
>> + rc_blob.size = cblob.size;
>> +
>> + vecs.iovecs = res->iov;
>> + vecs.num_iovecs = res->iov_cnt;
>> +
>> + result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
>> + cblob.resource_id, &rc_blob, &vecs,
>> + NULL);
>> + CHECK_RESULT(result, cmd);
>> + resource = g_steal_pointer(&res);
>> + QTAILQ_INSERT_HEAD(&g->reslist, resource, next);
>> +}
>> +
>> +static void
>> +rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + uint32_t slot = 0;
>> + struct virtio_gpu_simple_resource *res;
>> + struct rutabaga_mapping mapping = { 0 };
>> + struct virtio_gpu_resource_map_blob mblob;
>> + struct virtio_gpu_resp_map_info resp;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(mblob);
>> +
>> + CHECK(mblob.resource_id != 0, cmd);
>> +
>> + res = virtio_gpu_find_resource(g, mblob.resource_id);
>> + CHECK(res, cmd);
>> +
>> + result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
>> + CHECK_RESULT(result, cmd);
>> +
>> + for (slot = 0; slot < MAX_SLOTS; slot++) {
>> + if (memory_regions[slot].used) {
>> + continue;
>> + }
>> +
>> + MemoryRegion *mr = &(memory_regions[slot].mr);
>> + memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>> + (void *)mapping.ptr);
>> + memory_region_add_subregion(&g->parent_obj.hostmem,
>> + mblob.offset, mr);
>> + memory_regions[slot].resource_id = mblob.resource_id;
>> + memory_regions[slot].used = 1;
>> + break;
>> + }
>> +
>> + CHECK((slot < MAX_SLOTS), cmd);
>> +
>> + memset(&resp, 0, sizeof(resp));
>> + resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>> + result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
>> + &resp.map_info);
>> +
>> + CHECK_RESULT(result, cmd);
>> + virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>> +}
>> +
>> +static void
>> +rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + int32_t result;
>> + uint32_t slot = 0;
>> + struct virtio_gpu_simple_resource *res;
>> + struct virtio_gpu_resource_unmap_blob ublob;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(ublob);
>> +
>> + CHECK(ublob.resource_id != 0, cmd);
>> +
>> + res = virtio_gpu_find_resource(g, ublob.resource_id);
>> + CHECK(res, cmd);
>> +
>> + for (slot = 0; slot < MAX_SLOTS; slot++) {
>> + if (memory_regions[slot].resource_id != ublob.resource_id) {
>> + continue;
>> + }
>> +
>> + MemoryRegion *mr = &(memory_regions[slot].mr);
>> + memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>> +
>> + memory_regions[slot].resource_id = 0;
>> + memory_regions[slot].used = 0;
>> + break;
>> + }
>> +
>> + CHECK((slot < MAX_SLOTS), cmd);
>> + result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>> + struct virtio_gpu_ctrl_command *cmd)
>> +{
>> + struct rutabaga_fence fence = { 0 };
>> + int32_t result;
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>> +
>> + switch (cmd->cmd_hdr.type) {
>> + case VIRTIO_GPU_CMD_CTX_CREATE:
>> + rutabaga_cmd_context_create(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_CTX_DESTROY:
>> + rutabaga_cmd_context_destroy(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>> + rutabaga_cmd_create_resource_2d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>> + rutabaga_cmd_create_resource_3d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_SUBMIT_3D:
>> + rutabaga_cmd_submit_3d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>> + rutabaga_cmd_transfer_to_host_2d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>> + rutabaga_cmd_transfer_to_host_3d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>> + rutabaga_cmd_transfer_from_host_3d(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>> + rutabaga_cmd_attach_backing(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>> + rutabaga_cmd_detach_backing(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_SET_SCANOUT:
>> + rutabaga_cmd_set_scanout(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>> + rutabaga_cmd_resource_flush(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>> + rutabaga_cmd_resource_unref(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>> + rutabaga_cmd_ctx_attach_resource(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>> + rutabaga_cmd_ctx_detach_resource(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>> + rutabaga_cmd_get_capset_info(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_GET_CAPSET:
>> + rutabaga_cmd_get_capset(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>> + virtio_gpu_get_display_info(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_GET_EDID:
>> + virtio_gpu_get_edid(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>> + rutabaga_cmd_resource_create_blob(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>> + rutabaga_cmd_resource_map_blob(g, cmd);
>> + break;
>> + case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>> + rutabaga_cmd_resource_unmap_blob(g, cmd);
>> + break;
>> + default:
>> + cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>> + break;
>> + }
>> +
>> + if (cmd->finished) {
>> + return;
>> + }
>> + if (cmd->error) {
>> + error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>> + cmd->cmd_hdr.type, cmd->error);
>> + virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>> + return;
>> + }
>> + if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>> + virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>> + return;
>> + }
>> +
>> + fence.flags = cmd->cmd_hdr.flags;
>> + fence.ctx_id = cmd->cmd_hdr.ctx_id;
>> + fence.fence_id = cmd->cmd_hdr.fence_id;
>> + fence.ring_idx = cmd->cmd_hdr.ring_idx;
>> +
>> + trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
>> +
>> + result = rutabaga_create_fence(vr->rutabaga, &fence);
>> + CHECK_RESULT(result, cmd);
>> +}
>> +
>> +static void
>> +virtio_gpu_rutabaga_aio_cb(void *opaque)
>> +{
>> + struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
>> + VirtIOGPU *g = (VirtIOGPU *)data->vr;
>> + struct rutabaga_fence fence_data = data->fence;
>> + struct virtio_gpu_ctrl_command *cmd, *tmp;
>> +
>> + uint32_t signaled_ctx_specific = fence_data.flags &
>> + RUTABAGA_FLAG_INFO_RING_IDX;
>> +
>> + QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>> + /*
>> + * Match fences to commands on the same timeline: fences are either
>> + * global or context-specific (per ring_idx), and must not complete
>> + * commands on a different timeline.
>> + */
>> + uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>> + RUTABAGA_FLAG_INFO_RING_IDX;
>> +
>> + if (signaled_ctx_specific != target_ctx_specific) {
>> + continue;
>> + }
>> +
>> + if (signaled_ctx_specific &&
>> + (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>> + continue;
>> + }
>> +
>> + if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>> + continue;
>> + }
>> +
>> + trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>> + virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>> + QTAILQ_REMOVE(&g->fenceq, cmd, next);
>> + g_free(cmd);
>> + }
>> +
>> + g_free(data);
>> +}
>> +
>> +static void
>> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>> + struct rutabaga_fence fence_data) {
>> + struct rutabaga_aio_data *data;
>> + VirtIOGPU *g = (VirtIOGPU *)user_data;
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + /*
>> + * Both gfxstream and cross-domain (and even newer versions of virglrenderer:
>> + * see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence completion on
>> + * threads ("callback threads") that are different from the thread that
>> + * processes the command queue ("main thread").
>> + *
>> + * crosvm and other virtio-gpu 1.1 implementations enable callback threads
>> + * via locking. However, on QEMU a deadlock is observed if
>> + * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
>> + * from a thread that is not the main thread.
>> + *
>> + * The reason is QEMU's internal locking is designed to work with QEMU
>> + * threads (see rcu_register_thread()) and not generic C/C++/Rust threads.
>> + * For now, we can work around this by scheduling the return of the
>> + * fence descriptors on the main thread.
>> + */
>> +
>> + data = g_new0(struct rutabaga_aio_data, 1);
>> + data->vr = vr;
>> + data->fence = fence_data;
>> + aio_bh_schedule_oneshot_full(vr->ctx, virtio_gpu_rutabaga_aio_cb,
>> + (void *)data, "aio");
>> +}
>> +
>> +static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
>> +{
>> + int result;
>> + uint64_t capset_mask;
>> + struct rutabaga_channels channels = { 0 };
>> + struct rutabaga_builder builder = { 0 };
>> +
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> + vr->rutabaga = NULL;
>> +
>> + if (!vr->capset_names) {
>> + return -EINVAL;
>> + }
>> +
>> + builder.wsi = RUTABAGA_WSI_SURFACELESS;
>> + /*
>> + * Currently, if WSI is specified, the only valid strings are "surfaceless"
>> + * or "headless". Surfaceless doesn't create a native window surface, but
>> + * does copy from the render target to the Pixman buffer if a virtio-gpu
>> + * 2D hypercall is issued. Surfaceless is the default.
>> + *
>> + * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
>> + * use case is automated testing environments where there is no need to view
>> + * results.
>> + *
>> + * In the future, more performant virtio-gpu 2D UI integration may be added.
>> + */
>> + if (vr->wsi) {
>> + if (!strcmp(vr->wsi, "surfaceless")) {
>
>
> g_str_equal() is a bit more readable
>
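> e.g. a rough, untested sketch:
>
>     if (vr->wsi) {
>         if (g_str_equal(vr->wsi, "surfaceless")) {
>             vr->headless = false;
>         } else if (g_str_equal(vr->wsi, "headless")) {
>             vr->headless = true;
>         } else {
>             return -EINVAL;
>         }
>     }
>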
>>
>> + vr->headless = false;
>> + } else if (!strcmp(vr->wsi, "headless")) {
>> + vr->headless = true;
>> + } else {
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);
>> + if (result) {
>> + return result;
>> + }
>> +
>> + /*
>> + * rutabaga-0.1.1 is only compiled/tested with gfxstream and cross-domain
>> + * support. Future versions may change this to have more context types if
>> + * there is any interest.
>> + */
>> + if (capset_mask & (BIT(RUTABAGA_CAPSET_VIRGL) |
>> + BIT(RUTABAGA_CAPSET_VIRGL2) |
>> + BIT(RUTABAGA_CAPSET_VENUS) |
>> + BIT(RUTABAGA_CAPSET_DRM))) {
>> + return -EINVAL;
>> + }
>> +
>> + builder.user_data = (uint64_t)(uintptr_t)g;
>
>
> GPOINTER_TO_UINT(g) ?
>
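> Though note GPOINTER_TO_UINT() goes through guint, which would
> truncate a 64-bit pointer, while user_data is uint64_t; a sketch that
> keeps the full width:
>
>     builder.user_data = (uint64_t)(uintptr_t)g;
>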
>>
>> + builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
>> + builder.capset_mask = capset_mask;
>> +
>> + if (vr->wayland_socket_path) {
>> + if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) == 0) {
>> + return -EINVAL;
>> + }
>> +
>> + channels.channels =
>> + (struct rutabaga_channel *)calloc(1, sizeof(struct rutabaga_channel));
>
>
> g_new0(struct rutabaga_channel, 1)
>
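> i.e. (sketch, paired with the g_free() noted below):
>
>     channels.channels = g_new0(struct rutabaga_channel, 1);
>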
>>
>> + channels.num_channels = 1;
>> + channels.channels[0].channel_name = vr->wayland_socket_path;
>> + channels.channels[0].channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
>> + builder.channels = &channels;
>> + }
>> +
>> + result = rutabaga_init(&builder, &vr->rutabaga);
>> + if (builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) {
>> + free(channels.channels);
>
>
> g_free() (after switching to g_new)
>
>>
>> + }
>> +
>> + memset(memory_regions, 0, sizeof(memory_regions));
>> + vr->ctx = qemu_get_aio_context();
>> + return result;
>> +}
>> +
>> +static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
>> +{
>> + int result;
>> + uint32_t num_capsets;
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> +
>> + if (!vr->rutabaga_active) {
>> + result = virtio_gpu_rutabaga_init(g);
>> + if (result) {
>> + error_report("Failed to init rutabaga");
>> + return 0;
>> + }
>> +
>> + vr->rutabaga_active = true;
>> + }
>> +
>> + result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
>> + if (result) {
>> + error_report("Failed to get capsets");
>> + return 0;
>> + }
>> + vr->num_capsets = num_capsets;
>> + return num_capsets;
>> +}
>> +
>> +static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>> +{
>> + VirtIOGPU *g = VIRTIO_GPU(vdev);
>> + VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>> + struct virtio_gpu_ctrl_command *cmd;
>> +
>> + if (!virtio_queue_ready(vq)) {
>> + return;
>> + }
>> +
>> + if (!vr->rutabaga_active) {
>> + int result = virtio_gpu_rutabaga_init(g);
>> + if (!result) {
>> + vr->rutabaga_active = true;
>> + }
>> + }
>> +
>> + if (!vr->rutabaga_active) {
>> + return;
>> + }
>> +
>> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>> + while (cmd) {
>> + cmd->vq = vq;
>> + cmd->error = 0;
>> + cmd->finished = false;
>> + QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
>> + cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>> + }
>> +
>> + virtio_gpu_process_cmdq(g);
>> +}
>> +
>> +static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
>> +{
>> + int num_capsets;
>> + VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
>> + VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>> +
>
>
> It would be simpler to call virtio_gpu_rutabaga_init() here, with Error argument etc, instead of indirectly from other places.
>
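> Something like this (untested sketch; the error propagation shown is
> illustrative):
>
>     static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
>     {
>         VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>
>         if (virtio_gpu_rutabaga_init(gpudev) < 0) {
>             error_setg(errp, "Failed to init rutabaga");
>             return;
>         }
>         ...
>     }
>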
>> + num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
>> + if (!num_capsets) {
>> + return;
>> + }
>> +
>> +#if HOST_BIG_ENDIAN
>> + error_setg(errp, "rutabaga is not supported on bigendian platforms");
>> + return;
>> +#endif
>> +
>> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
>> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
>> + bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
>> +
>> + bdev->virtio_config.num_capsets = num_capsets;
>> + virtio_gpu_device_realize(qdev, errp);
>> +}
>> +
>> +static Property virtio_gpu_rutabaga_properties[] = {
>> + DEFINE_PROP_STRING("capset_names", VirtioGpuRutabaga, capset_names),
>> + DEFINE_PROP_STRING("wayland_socket_path", VirtioGpuRutabaga,
>> + wayland_socket_path),
>> + DEFINE_PROP_STRING("wsi", VirtioGpuRutabaga, wsi),
>> + DEFINE_PROP_END_OF_LIST(),
>> +};
>> +
>> +static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
>> +{
>> + DeviceClass *dc = DEVICE_CLASS(klass);
>> + VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
>> + VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
>> + VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
>> +
>> + vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
>> + vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
>> + vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
>> + vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
>> +
>> + vdc->realize = virtio_gpu_rutabaga_realize;
>> + device_class_set_props(dc, virtio_gpu_rutabaga_properties);
>> +}
>> +
>> +static const TypeInfo virtio_gpu_rutabaga_info = {
>> + .name = TYPE_VIRTIO_GPU_RUTABAGA,
>> + .parent = TYPE_VIRTIO_GPU,
>> + .instance_size = sizeof(VirtioGpuRutabaga),
>> + .class_init = virtio_gpu_rutabaga_class_init,
>> +};
>> +module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
>> +module_kconfig(VIRTIO_GPU);
>> +
>> +static void virtio_register_types(void)
>> +{
>> + type_register_static(&virtio_gpu_rutabaga_info);
>> +}
>> +
>> +type_init(virtio_register_types)
>> +
>> +module_dep("hw-display-virtio-gpu");
>> diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
>> new file mode 100644
>> index 0000000000..01831bd03f
>> --- /dev/null
>> +++ b/hw/display/virtio-vga-rutabaga.c
>> @@ -0,0 +1,52 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +
>> +#include "qemu/osdep.h"
>> +#include "hw/pci/pci.h"
>> +#include "hw/qdev-properties.h"
>> +#include "hw/virtio/virtio-gpu.h"
>> +#include "hw/display/vga.h"
>> +#include "qapi/error.h"
>> +#include "qemu/module.h"
>> +#include "virtio-vga.h"
>> +#include "qom/object.h"
>> +
>> +#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
>> +
>> +typedef struct VirtIOVGARUTABAGA VirtIOVGARUTABAGA;
>> +DECLARE_INSTANCE_CHECKER(VirtIOVGARUTABAGA, VIRTIO_VGA_RUTABAGA,
>> + TYPE_VIRTIO_VGA_RUTABAGA)
>> +
>> +struct VirtIOVGARUTABAGA {
>> + VirtIOVGABase parent_obj;
>> +
>> + VirtioGpuRutabaga vdev;
>> +};
>> +
>> +static void virtio_vga_rutabaga_inst_initfn(Object *obj)
>> +{
>> + VirtIOVGARUTABAGA *dev = VIRTIO_VGA_RUTABAGA(obj);
>> +
>> + virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>> + TYPE_VIRTIO_GPU_RUTABAGA);
>> + VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>> +}
>> +
>> +static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
>> + .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
>> + .parent = TYPE_VIRTIO_VGA_BASE,
>> + .instance_size = sizeof(VirtIOVGARUTABAGA),
>> + .instance_init = virtio_vga_rutabaga_inst_initfn,
>> +};
>> +module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
>> +module_kconfig(VIRTIO_VGA);
>> +
>> +static void virtio_vga_register_types(void)
>> +{
>> + if (have_vga) {
>> + virtio_pci_types_register(&virtio_vga_rutabaga_info);
>> + }
>> +}
>> +
>> +type_init(virtio_vga_register_types)
>> +
>> +module_dep("hw-display-virtio-vga");
>> --
>> 2.41.0.255.g8b1d071c50-goog
>>
>>
>
>
> --
> Marc-André Lureau
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation
2023-07-12 21:40 ` Akihiko Odaki
@ 2023-07-13 1:28 ` Gurchetan Singh
0 siblings, 0 replies; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-13 1:28 UTC (permalink / raw)
To: Akihiko Odaki
Cc: qemu-devel, kraxel, marcandre.lureau, dmitry.osipenko, ray.huang,
alex.bennee, shentey
On Wed, Jul 12, 2023 at 2:40 PM Akihiko Odaki <akihiko.odaki@gmail.com> wrote:
>
> On 2023/07/11 11:56, Gurchetan Singh wrote:
> > This adds basic documentation for virtio-gpu.
>
> Thank you for adding documentation for other backends too. I have been
> asked how virtio-gpu works so many times and always had to explain it
> myself, though Gerd does have a nice article.* This documentation will help.
>
> * https://www.kraxel.org/blog/2021/05/virtio-gpu-qemu-graphics-update/
>
> >
> > Suggested-by: Akihiko Odaki <akihiko.odaki@daynix.com>
> > Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> > ---
> > docs/system/device-emulation.rst | 1 +
> > docs/system/devices/virtio-gpu.rst | 80 ++++++++++++++++++++++++++++++
> > 2 files changed, 81 insertions(+)
> > create mode 100644 docs/system/devices/virtio-gpu.rst
> >
> > diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
> > index 4491c4cbf7..1167f3a9f2 100644
> > --- a/docs/system/device-emulation.rst
> > +++ b/docs/system/device-emulation.rst
> > @@ -91,6 +91,7 @@ Emulated Devices
> > devices/nvme.rst
> > devices/usb.rst
> > devices/vhost-user.rst
> > + devices/virtio-gpu.rst
> > devices/virtio-pmem.rst
> > devices/vhost-user-rng.rst
> > devices/canokey.rst
> > diff --git a/docs/system/devices/virtio-gpu.rst b/docs/system/devices/virtio-gpu.rst
> > new file mode 100644
> > index 0000000000..2426039540
> > --- /dev/null
> > +++ b/docs/system/devices/virtio-gpu.rst
> > @@ -0,0 +1,80 @@
> > +..
> > + SPDX-License-Identifier: GPL-2.0
> > +
> > +virtio-gpu
> > +==========
> > +
> > +This document explains the setup and usage of the virtio-gpu device.
> > +The virtio-gpu device paravirtualizes the GPU and display controller.
> > +
> > +Linux kernel support
> > +--------------------
> > +
> > +virtio-gpu requires a guest Linux kernel built with the
> > +``CONFIG_DRM_VIRTIO_GPU`` option.
> > +
> > +QEMU virtio-gpu variants
> > +------------------------
> > +
> > +There are many virtio-gpu device variants, listed below:
> > +
> > + * ``virtio-vga``
> > + * ``virtio-gpu-pci``
> > + * ``virtio-vga-gl``
> > + * ``virtio-gpu-gl-pci``
> > + * ``virtio-vga-rutabaga``
> > + * ``virtio-gpu-rutabaga-pci``
> > + * ``vhost-user-vga``
> > + * ``vhost-user-gl-pci``
>
> > +
> > +QEMU provides a 2D virtio-gpu backend, and two accelerated backends:
> > +virglrenderer ('gl' device label) and rutabaga_gfx ('rutabaga' device
> > +label). There is also a vhost-user backend that runs the 2D device
> > +in a separate process. Each device type has a VGA or PCI variant. This
> > +document uses the PCI variant in examples.
>
> I suggest replacing "2D device" with "graphics stack"; vhost-user works
> with 3D too. It's also slightly awkward to say a device runs in a
> separate process, as some portion of the device emulation is always
> stuck in QEMU. In my opinion, the point of the vhost-user backend is to
> isolate the gigantic graphics stack, so let's phrase it that way.
>
> I also have a slightly different understanding regarding virtio-gpu variants.
> First, the variants can be classified into VGA and non-VGA ones. The VGA
> ones are prefixed with virtio-vga or vhost-user-vga while the non-VGA
> ones are prefixed with virtio-gpu or vhost-user-gpu.
>
> The VGA ones always use the PCI interface, but for the non-VGA ones, you
> can further pick simple MMIO or PCI. For MMIO, you can suffix the device
> name with -device, though vhost-user-gpu apparently does not support
> MMIO. For PCI, you can suffix it with -pci. Without these suffixes, the
> platform default will be chosen.
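> (e.g. for the GL backend: virtio-gpu-gl-device for MMIO,
> virtio-gpu-gl-pci for PCI, or plain virtio-gpu-gl for the default.)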
>
> Since enumerating all variants will result in a long list, you may
> provide abstract syntaxes like the following for this explanation:
>
> * virtio-vga[-BACKEND]
> * virtio-gpu[-BACKEND][-INTERFACE]
> * vhost-user-vga
> * vhost-user-pci
>
> > +
> > +virtio-gpu 2d
> > +-------------
> > +
> > +The default 2D mode uses a guest software renderer (llvmpipe, lavapipe,
> > +Swiftshader) to provide the OpenGL/Vulkan implementations.
>
> It's certainly possible to use virtio-gpu without software
> OpenGL/Vulkan. A major example is Windows; its software renderer is
> somewhat limited in my understanding.
>
> My suggestion:
> The default 2D backend only performs 2D operations. The guest needs to
> employ a software renderer for 3D graphics.
>
> It's also better to provide links for the renderers. Apparently Lavapipe
> does not have dedicated documentation, so you may add a link for Mesa
> and mention them like:
> LLVMpipe and Lavapipe included in `Mesa`_, or `SwiftShader`_
>
> And I think it will be helpful to say LLVMpipe and Lavapipe work out of
> the box on typical modern Linux distributions, as that should be what
> people care about.
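>
> Putting that together, a sketch (the link URLs are my guesses):
>
>     The default 2D backend only performs 2D operations. The guest
>     needs to employ a software renderer for 3D graphics, such as
>     LLVMpipe or Lavapipe included in `Mesa`_, or `SwiftShader`_.
>
>     .. _Mesa: https://mesa3d.org/
>     .. _SwiftShader: https://github.com/google/swiftshader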
>
> > +
> > +.. parsed-literal::
> > + -device virtio-gpu-pci
> > +
> > +virtio-gpu virglrenderer
> > +------------------------
> > +
> > +When using virgl accelerated graphics mode, OpenGL API calls are translated
> > +into an intermediate representation (see `Gallium3D`_). The intermediate
> > +representation is communicated to the host and the `virglrenderer`_ library
> > +on the host translates the intermediate representation back to OpenGL API
> > +calls.
> It should be mentioned that the translation occurs on the guest side,
> and the guest-side component is included in Linux distributions, just
> as LLVMpipe and Lavapipe are.
>
> > +
> > +.. parsed-literal::
> > + -device virtio-gpu-gl-pci
> > +
> > +.. _Gallium3D: https://www.freedesktop.org/wiki/Software/gallium/
> > +.. _virglrenderer: https://gitlab.freedesktop.org/virgl/virglrenderer/
> > +
> > +virtio-gpu rutabaga
> > +-------------------
> > +
> > +virtio-gpu can also leverage `rutabaga_gfx`_ to provide `gfxstream`_ rendering
> > +and `Wayland display passthrough`_. With the gfxstream rendering mode, GLES
> > +and Vulkan calls are forwarded directly to the host with minimal modification.
>
> I find the description included in the PDF you posted on GitLab* quite
> useful, so I suggest incorporating its content.
>
> You may omit the overall design diagram, as it mentions guest-side and
> Rutabaga details and crosvm, and may be confusing for QEMU users.
>
> The detailed commands for building dependencies may also be omitted;
> instead, point to the documentation of the respective projects, as the
> commands are subject to future changes.
>
> It's unfortunate that rutabaga_gfx and goldfish-opengl do not come with
> proper documentation (and I wonder if rutabaga_gfx still needs the hack
> mentioned in the PDF). For now, the procedure to build them should be
> included in the documentation, since it would otherwise take a
> first-time reader hours to figure out.
>
> *
> https://gitlab.com/qemu-project/qemu/uploads/f960580bf0f19077e0330960b4a3152e/gfxstream_+_QEMU_setup__public_.pdf
The new doc in https://gitlab.com/qemu-project/qemu/-/issues/1611#note_1464562962
doesn't require the hack patch. I'll incorporate your other
suggestions in v2.
> > +
> > +Please refer the `crosvm book`_ on how to setup the guest for Wayland
> > +passthrough (QEMU uses the same implementation).
> > +
> > +This device does require host blob support (``hostmem`` field below), but not
> > +all capsets (``capset_names`` below) have to be enabled when starting the device.
> > +
> > +.. parsed-literal::
> > + -device virtio-gpu-rutabaga-pci,capset_names=gfxstream-vulkan:cross-domain,\\
> > + hostmem=8G,wayland_socket_path="$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY"
> > +
> > +.. _rutabaga_gfx: https://github.com/google/crosvm/blob/main/rutabaga_gfx/ffi/src/include/rutabaga_gfx_ffi.h
> > +.. _gfxstream: https://android.googlesource.com/platform/hardware/google/gfxstream/
> > +.. _Wayland display passthrough: https://www.youtube.com/watch?v=OZJiHMtIQ2M
> > +.. _crosvm book: https://crosvm.dev/book/devices/wayland.html
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream
2023-07-11 2:56 ` [PATCH v1 6/9] gfxstream + rutabaga: add initial support for gfxstream Gurchetan Singh
2023-07-12 12:31 ` Akihiko Odaki
2023-07-12 19:14 ` Marc-André Lureau
@ 2023-07-15 19:58 ` Bernhard Beschow
2 siblings, 0 replies; 22+ messages in thread
From: Bernhard Beschow @ 2023-07-15 19:58 UTC (permalink / raw)
To: Gurchetan Singh, qemu-devel
Cc: --cc=kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee
On 11 July 2023 02:56:46 UTC, Gurchetan Singh <gurchetansingh@chromium.org> wrote:
>This adds initial support for gfxstream and cross-domain. Both
>features rely on virtio-gpu blob resources and context types, which
>are also implemented in this patch.
>
>gfxstream has a long and illustrious history in Android graphics
>paravirtualization. It has been powering graphics in the Android
>Studio Emulator for more than a decade, which is the main developer
>platform.
>
>Originally conceived by Jesse Hall, it was first known as "EmuGL" [a].
>The key design characteristic was a 1:1 threading model and
>auto-generation, which fit nicely with the OpenGLES spec. It also
>allowed easy layering with ANGLE on the host, which provides the GLES
>implementations on Windows or macOS environments.
>
>gfxstream has traditionally been maintained by a single engineer, and
>between 2015 to 2021, the goldfish throne passed to Frank Yang.
>Historians often remark this glorious reign ("pax gfxstreama" is the
>academic term) was comparable to that of Augustus and both Queen
>Elizabeths. Just to name a few accomplishments in a resplendent
>panoply: higher versions of GLES, address space graphics, snapshot
>support and CTS compliant Vulkan [b].
>
>One major drawback was the use of out-of-tree goldfish drivers.
>Android engineers didn't know much about DRM/KMS and especially TTM so
>a simple guest to host pipe was conceived.
>
>Luckily, virtio-gpu 3D started to emerge in 2016 due to the work of
>the Mesa/virglrenderer communities. In 2018, the initial virtio-gpu
>port of gfxstream was done by Cuttlefish enthusiast Alistair Delva.
>It was a symbol compatible replacement of virglrenderer [c] and named
>"AVDVirglrenderer". This implementation forms the basis of the
>current gfxstream host implementation still in use today.
>
>cross-domain support follows a similar arc. Originally conceived by
>Wayland aficionado David Reveman and crosvm enjoyer Zach Reizner in
>2018, it initially relied on the downstream "virtio-wl" device.
>
>In 2020 and 2021, virtio-gpu was extended to include blob resources
>and multiple timelines by yours truly, features gfxstream/cross-domain
>both require to function correctly.
>
>Right now, we stand at the precipice of a truly fantastic possibility:
>the Android Emulator powered by upstream QEMU and upstream Linux
>kernel. gfxstream will then be packaged properly, and app
>developers can even fix gfxstream bugs on their own if they encounter
>them.
>
>It's been quite the ride, my friends. Where will gfxstream head next,
>nobody really knows. I wouldn't be surprised if it's around for
>another decade, maintained by a new generation of Android graphics
>enthusiasts.
AFAIU gfxstream is a substitute for virglrenderer and relies on an auto-generated interface based on OpenGL/Vulkan between host and guest. I would like to use it in QEMU (Windows host, Linux guest).
So I tried to test your series under Linux for now. I couldn't get past aborts with generic error messages, or blank screens with no error messages; my Linux host might not provide a recent enough environment, though.
Read on for some technical reviews below.
>
>Technical details:
> - Very simple initial display integration: just used Pixman
> - Largely, 1:1 mapping of virtio-gpu hypercalls to rutabaga function
> calls
>
>[a] https://android-review.googlesource.com/c/platform/development/+/34470
>[b] https://android-review.googlesource.com/q/topic:%22vulkan-hostconnection-start%22
>[c] https://android-review.googlesource.com/c/device/generic/goldfish-opengl/+/761927
>
>Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
>---
>v2: Incorporated various suggestions by Akihiko Odaki and Bernhard Beschow
> - Removed GET_VIRTIO_GPU_GL / GET_RUTABAGA macros
> - Used error_report(..)
> - Used g_autofree to fix leaks on error paths
> - Removed unnecessary casts
> - added virtio-gpu-pci-rutabaga.c + virtio-vga-rutabaga.c files
>
> hw/display/virtio-gpu-pci-rutabaga.c | 48 ++
> hw/display/virtio-gpu-rutabaga.c | 1088 ++++++++++++++++++++++++++
> hw/display/virtio-vga-rutabaga.c | 52 ++
> 3 files changed, 1188 insertions(+)
> create mode 100644 hw/display/virtio-gpu-pci-rutabaga.c
> create mode 100644 hw/display/virtio-gpu-rutabaga.c
> create mode 100644 hw/display/virtio-vga-rutabaga.c
>
>diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
>new file mode 100644
>index 0000000000..5765bef266
>--- /dev/null
>+++ b/hw/display/virtio-gpu-pci-rutabaga.c
>@@ -0,0 +1,48 @@
>+// SPDX-License-Identifier: GPL-2.0
>+
>+#include "qemu/osdep.h"
>+#include "qapi/error.h"
>+#include "qemu/module.h"
>+#include "hw/pci/pci.h"
>+#include "hw/qdev-properties.h"
>+#include "hw/virtio/virtio.h"
>+#include "hw/virtio/virtio-bus.h"
>+#include "hw/virtio/virtio-gpu-pci.h"
>+#include "qom/object.h"
>+
>+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
>+typedef struct VirtIOGPURUTABAGAPCI VirtIOGPURUTABAGAPCI;
>+DECLARE_INSTANCE_CHECKER(VirtIOGPURUTABAGAPCI, VIRTIO_GPU_RUTABAGA_PCI,
>+ TYPE_VIRTIO_GPU_RUTABAGA_PCI)
>+
>+struct VirtIOGPURUTABAGAPCI {
>+ VirtIOGPUPCIBase parent_obj;
>+ VirtioGpuRutabaga vdev;
>+};
>+
>+static void virtio_gpu_rutabaga_initfn(Object *obj)
>+{
>+ VirtIOGPURUTABAGAPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
>+
>+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>+ TYPE_VIRTIO_GPU_RUTABAGA);
>+ VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>+}
>+
>+static const VirtioPCIDeviceTypeInfo virtio_gpu_rutabaga_pci_info = {
>+ .generic_name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
>+ .parent = TYPE_VIRTIO_GPU_PCI_BASE,
>+ .instance_size = sizeof(VirtIOGPURUTABAGAPCI),
>+ .instance_init = virtio_gpu_rutabaga_initfn,
>+};
>+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
>+module_kconfig(VIRTIO_PCI);
>+
>+static void virtio_gpu_rutabaga_pci_register_types(void)
>+{
>+ virtio_pci_types_register(&virtio_gpu_rutabaga_pci_info);
>+}
>+
>+type_init(virtio_gpu_rutabaga_pci_register_types)
>+
>+module_dep("hw-display-virtio-gpu-pci");
>diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
>new file mode 100644
>index 0000000000..b60a30a093
>--- /dev/null
>+++ b/hw/display/virtio-gpu-rutabaga.c
>@@ -0,0 +1,1088 @@
>+// SPDX-License-Identifier: GPL-2.0
>+
>+#include "qemu/osdep.h"
>+#include "qemu/error-report.h"
>+#include "qemu/iov.h"
>+#include "trace.h"
>+#include "hw/virtio/virtio.h"
>+#include "hw/virtio/virtio-gpu.h"
>+#include "hw/virtio/virtio-gpu-pixman.h"
>+#include "hw/virtio/virtio-iommu.h"
>+
>+#include <glib.h>
>+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
>+
>+#define CHECK(condition, cmd) \
>+ do { \
>+ if (!(condition)) { \
>+ error_report("CHECK failed in %s() %s:%d", __func__, \
>+ __FILE__, __LINE__); \
>+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
>+ return; \
>+ } \
>+ } while (0)
>+
>+#define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
>+
>+#define MAX_SLOTS 4096
>+
>+struct MemoryRegionInfo {
>+ int used;
>+ MemoryRegion mr;
>+ uint32_t resource_id;
>+};
>+
>+static struct MemoryRegionInfo memory_regions[MAX_SLOTS];
>+
>+struct rutabaga_aio_data {
>+ struct VirtioGpuRutabaga *vr;
>+ struct rutabaga_fence fence;
>+};
>+
>+static void
>+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
>+ uint32_t resource_id)
>+{
>+ struct virtio_gpu_simple_resource *res;
>+ struct rutabaga_transfer transfer = { 0 };
>+ struct iovec transfer_iovec;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ res = virtio_gpu_find_resource(g, resource_id);
>+ if (!res) {
>+ return;
>+ }
>+
>+ if (res->width != s->current_cursor->width ||
>+ res->height != s->current_cursor->height) {
>+ return;
>+ }
>+
>+ transfer.x = 0;
>+ transfer.y = 0;
>+ transfer.z = 0;
>+ transfer.w = res->width;
>+ transfer.h = res->height;
>+ transfer.d = 1;
>+
>+ transfer_iovec.iov_base = (void *)s->current_cursor->data;
>+ transfer_iovec.iov_len = res->width * res->height * 4;
>+
>+ rutabaga_resource_transfer_read(vr->rutabaga, 0,
>+ resource_id, &transfer,
>+ &transfer_iovec);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
>+{
>+ VirtIOGPU *g = VIRTIO_GPU(b);
>+ virtio_gpu_process_cmdq(g);
>+}
>+
>+static void
>+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct rutabaga_create_3d rc_3d = { 0 };
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_create_2d c2d;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(c2d);
>+ trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
>+ c2d.width, c2d.height);
>+
>+ rc_3d.target = 2;
>+ rc_3d.format = c2d.format;
>+ rc_3d.bind = (1 << 1);
>+ rc_3d.width = c2d.width;
>+ rc_3d.height = c2d.height;
>+ rc_3d.depth = 1;
>+ rc_3d.array_size = 1;
>+ rc_3d.last_level = 0;
>+ rc_3d.nr_samples = 0;
>+ rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
>+
>+ result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
>+ CHECK_RESULT(result, cmd);
>+
>+ res = g_new0(struct virtio_gpu_simple_resource, 1);
>+ res->width = c2d.width;
>+ res->height = c2d.height;
>+ res->format = c2d.format;
>+ res->resource_id = c2d.resource_id;
>+
>+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>+}
>+
>+static void
>+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct rutabaga_create_3d rc_3d = { 0 };
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_create_3d c3d;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(c3d);
>+
>+ trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
>+ c3d.width, c3d.height, c3d.depth);
>+
>+ rc_3d.target = c3d.target;
>+ rc_3d.format = c3d.format;
>+ rc_3d.bind = c3d.bind;
>+ rc_3d.width = c3d.width;
>+ rc_3d.height = c3d.height;
>+ rc_3d.depth = c3d.depth;
>+ rc_3d.array_size = c3d.array_size;
>+ rc_3d.last_level = c3d.last_level;
>+ rc_3d.nr_samples = c3d.nr_samples;
>+ rc_3d.flags = c3d.flags;
>+
>+ result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
>+ CHECK_RESULT(result, cmd);
>+
>+ res = g_new0(struct virtio_gpu_simple_resource, 1);
>+ res->width = c3d.width;
>+ res->height = c3d.height;
>+ res->format = c3d.format;
>+ res->resource_id = c3d.resource_id;
>+
>+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
>+}
>+
>+static void
>+rutabaga_cmd_resource_unref(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_unref unref;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(unref);
>+
>+ trace_virtio_gpu_cmd_res_unref(unref.resource_id);
>+
>+ res = virtio_gpu_find_resource(g, unref.resource_id);
>+ CHECK(res, cmd);
>+
>+ result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
>+ CHECK_RESULT(result, cmd);
>+
>+ if (res->image) {
>+ pixman_image_unref(res->image);
>+ }
>+
>+ QTAILQ_REMOVE(&g->reslist, res, next);
>+ g_free(res);
>+}
>+
>+static void
>+rutabaga_cmd_context_create(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_ctx_create cc;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(cc);
>+ trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
>+ cc.debug_name);
>+
>+ result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
>+ cc.context_init, cc.debug_name, cc.nlen);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_context_destroy(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_ctx_destroy cd;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(cd);
>+ trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
>+
>+ result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result, i;
>+ struct virtio_gpu_scanout *scanout = NULL;
>+ struct virtio_gpu_simple_resource *res;
>+ struct rutabaga_transfer transfer = { 0 };
>+ struct iovec transfer_iovec;
>+ struct virtio_gpu_resource_flush rf;
>+ bool found = false;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+ if (vr->headless) {
>+ return;
>+ }
>+
>+ VIRTIO_GPU_FILL_CMD(rf);
>+ trace_virtio_gpu_cmd_res_flush(rf.resource_id,
>+ rf.r.width, rf.r.height, rf.r.x, rf.r.y);
>+
>+ res = virtio_gpu_find_resource(g, rf.resource_id);
>+ CHECK(res, cmd);
>+
>+ for (i = 0; i < g->parent_obj.conf.max_outputs; i++) {
>+ scanout = &g->parent_obj.scanout[i];
>+ if (i == res->scanout_bitmask) {
>+ found = true;
>+ break;
>+ }
>+ }
>+
>+ if (!found) {
>+ return;
>+ }
>+
>+ transfer.x = 0;
>+ transfer.y = 0;
>+ transfer.z = 0;
>+ transfer.w = res->width;
>+ transfer.h = res->height;
>+ transfer.d = 1;
>+
>+ transfer_iovec.iov_base = (void *)pixman_image_get_data(res->image);
>+ transfer_iovec.iov_len = res->width * res->height * 4;
>+
>+ result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
>+ rf.resource_id, &transfer,
>+ &transfer_iovec);
>+ CHECK_RESULT(result, cmd);
>+ dpy_gfx_update_full(scanout->con);
>+}
>+
>+static void
>+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_scanout *scanout = NULL;
>+ struct virtio_gpu_set_scanout ss;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+ if (vr->headless) {
>+ return;
>+ }
>+
>+ VIRTIO_GPU_FILL_CMD(ss);
>+ trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
>+ ss.r.width, ss.r.height, ss.r.x, ss.r.y);
>+
>+ scanout = &g->parent_obj.scanout[ss.scanout_id];
>+ g->parent_obj.enable = 1;
>+
>+ if (ss.resource_id == 0) {
>+ return;
>+ }
>+
>+ res = virtio_gpu_find_resource(g, ss.resource_id);
>+ CHECK(res, cmd);
>+
>+ if (!res->image) {
>+ pixman_format_code_t pformat;
>+ pformat = virtio_gpu_get_pixman_format(res->format);
>+ CHECK(pformat, cmd);
>+
>+ res->image = pixman_image_create_bits(pformat,
>+ res->width,
>+ res->height,
>+ NULL, 0);
>+ CHECK(res->image, cmd);
>+ pixman_image_ref(res->image);
>+ }
>+
>+ /* realloc the surface ptr */
>+ scanout->ds = qemu_create_displaysurface_pixman(res->image);
>+ dpy_gfx_replace_surface(scanout->con, NULL);
>+ dpy_gfx_replace_surface(scanout->con, scanout->ds);
>+ res->scanout_bitmask = ss.scanout_id;
>+}
>+
>+static void
>+rutabaga_cmd_submit_3d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_cmd_submit cs;
>+ g_autofree uint8_t *buf = NULL;
>+ size_t s;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(cs);
>+ trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
>+
>+ buf = g_new0(uint8_t, cs.size);
>+ s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
>+ sizeof(cs), buf, cs.size);
>+ CHECK((s == cs.size), cmd);
>+
>+ result = rutabaga_submit_command(vr->rutabaga, cs.hdr.ctx_id, buf, cs.size);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct rutabaga_transfer transfer = { 0 };
>+ struct virtio_gpu_transfer_to_host_2d t2d;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(t2d);
>+ trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
>+
>+ transfer.x = t2d.r.x;
>+ transfer.y = t2d.r.y;
>+ transfer.z = 0;
>+ transfer.w = t2d.r.width;
>+ transfer.h = t2d.r.height;
>+ transfer.d = 1;
>+
>+ result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
>+ &transfer);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct rutabaga_transfer transfer = { 0 };
>+ struct virtio_gpu_transfer_host_3d t3d;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(t3d);
>+ trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
>+
>+ transfer.x = t3d.box.x;
>+ transfer.y = t3d.box.y;
>+ transfer.z = t3d.box.z;
>+ transfer.w = t3d.box.w;
>+ transfer.h = t3d.box.h;
>+ transfer.d = t3d.box.d;
>+ transfer.level = t3d.level;
>+ transfer.stride = t3d.stride;
>+ transfer.layer_stride = t3d.layer_stride;
>+ transfer.offset = t3d.offset;
>+
>+ result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
>+ t3d.resource_id, &transfer);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct rutabaga_transfer transfer = { 0 };
>+ struct virtio_gpu_transfer_host_3d t3d;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(t3d);
>+ trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
>+
>+ transfer.x = t3d.box.x;
>+ transfer.y = t3d.box.y;
>+ transfer.z = t3d.box.z;
>+ transfer.w = t3d.box.w;
>+ transfer.h = t3d.box.h;
>+ transfer.d = t3d.box.d;
>+ transfer.level = t3d.level;
>+ transfer.stride = t3d.stride;
>+ transfer.layer_stride = t3d.layer_stride;
>+ transfer.offset = t3d.offset;
>+
>+ result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
>+ t3d.resource_id, &transfer, NULL);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ struct rutabaga_iovecs vecs = { 0 };
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_attach_backing att_rb;
>+ struct iovec *res_iovs;
>+ uint32_t res_niov;
>+ int ret;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(att_rb);
>+ trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
>+
>+ res = virtio_gpu_find_resource(g, att_rb.resource_id);
>+ CHECK(res, cmd);
>+ CHECK(!res->iov, cmd);
>+
>+ ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
>+ cmd, NULL, &res_iovs, &res_niov);
>+ CHECK_RESULT(ret, cmd);
>+
>+ vecs.iovecs = res_iovs;
>+ vecs.num_iovecs = res_niov;
>+
>+ ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
>+ &vecs);
>+ if (ret != 0) {
>+ virtio_gpu_cleanup_mapping_iov(g, res_iovs, res_niov);
>+ }
>+}
>+
>+static void
>+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_detach_backing detach_rb;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(detach_rb);
>+ trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
>+
>+ res = virtio_gpu_find_resource(g, detach_rb.resource_id);
>+ CHECK(res, cmd);
>+
>+ rutabaga_resource_detach_backing(vr->rutabaga,
>+ detach_rb.resource_id);
>+
>+ virtio_gpu_cleanup_mapping(g, res);
>+}
>+
>+static void
>+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_ctx_resource att_res;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(att_res);
>+ trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
>+ att_res.resource_id);
>+
>+ result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
>+ att_res.resource_id);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_ctx_resource det_res;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(det_res);
>+ trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
>+ det_res.resource_id);
>+
>+ result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
>+ det_res.resource_id);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_get_capset_info info;
>+ struct virtio_gpu_resp_capset_info resp;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(info);
>+
>+ result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
>+ &resp.capset_id, &resp.capset_max_version,
>+ &resp.capset_max_size);
>+ CHECK_RESULT(result, cmd);
>+
>+ resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
>+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>+}
>+
>+static void
>+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ struct virtio_gpu_get_capset gc;
>+ struct virtio_gpu_resp_capset *resp;
>+ uint32_t capset_size;
>+ uint32_t current_id;
>+ bool found = false;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(gc);
>+ for (uint32_t i = 0; i < vr->num_capsets; i++) {
>+ result = rutabaga_get_capset_info(vr->rutabaga, i,
>+ &current_id, &capset_size,
>+ &capset_size);
>+ CHECK_RESULT(result, cmd);
>+
>+ if (current_id == gc.capset_id) {
>+ found = true;
>+ break;
>+ }
>+ }
>+
>+ if (!found) {
>+ error_report("capset not found!");
>+ return;
>+ }
>+
>+ resp = g_malloc0(sizeof(*resp) + capset_size);
>+ resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
>+ rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
>+ (uint8_t *)resp->capset_data, capset_size);
>+
>+ virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
>+ g_free(resp);
>+}
>+
>+static void
>+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int result;
>+ struct rutabaga_iovecs vecs = { 0 };
>+ g_autofree struct virtio_gpu_simple_resource *res = NULL;
>+ struct virtio_gpu_simple_resource *resource;
>+ struct virtio_gpu_resource_create_blob cblob;
>+ struct rutabaga_create_blob rc_blob = { 0 };
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(cblob);
>+ trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
>+
>+ CHECK(cblob.resource_id != 0, cmd);
>+
>+ res = g_new0(struct virtio_gpu_simple_resource, 1);
>+
>+ res->resource_id = cblob.resource_id;
>+ res->blob_size = cblob.size;
>+
>+ if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
>+ result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
>+ sizeof(cblob), cmd, &res->addrs,
>+ &res->iov, &res->iov_cnt);
>+ CHECK_RESULT(result, cmd);
>+ }
>+
>+ rc_blob.blob_id = cblob.blob_id;
>+ rc_blob.blob_mem = cblob.blob_mem;
>+ rc_blob.blob_flags = cblob.blob_flags;
>+ rc_blob.size = cblob.size;
>+
>+ vecs.iovecs = res->iov;
>+ vecs.num_iovecs = res->iov_cnt;
>+
>+ result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
>+ cblob.resource_id, &rc_blob, &vecs,
>+ NULL);
>+ CHECK_RESULT(result, cmd);
>+ resource = g_steal_pointer(&res);
>+ QTAILQ_INSERT_HEAD(&g->reslist, resource, next);
>+}
>+
>+static void
>+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ uint32_t slot = 0;
>+ struct virtio_gpu_simple_resource *res;
>+ struct rutabaga_mapping mapping = { 0 };
>+ struct virtio_gpu_resource_map_blob mblob;
>+ struct virtio_gpu_resp_map_info resp;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(mblob);
>+
>+ CHECK(mblob.resource_id != 0, cmd);
>+
>+ res = virtio_gpu_find_resource(g, mblob.resource_id);
>+ CHECK(res, cmd);
>+
>+ result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
>+ CHECK_RESULT(result, cmd);
>+
>+ for (slot = 0; slot < MAX_SLOTS; slot++) {
>+ if (memory_regions[slot].used) {
>+ continue;
>+ }
>+
>+ MemoryRegion *mr = &(memory_regions[slot].mr);
>+ memory_region_init_ram_ptr(mr, NULL, "blob", mapping.size,
>+ (void *)mapping.ptr);
>+ memory_region_add_subregion(&g->parent_obj.hostmem,
>+ mblob.offset, mr);
>+ memory_regions[slot].resource_id = mblob.resource_id;
>+ memory_regions[slot].used = 1;
>+ break;
>+ }
>+
>+ CHECK((slot < MAX_SLOTS), cmd);
>+
>+ memset(&resp, 0, sizeof(resp));
>+ resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
>+ result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
>+ &resp.map_info);
>+
>+ CHECK_RESULT(result, cmd);
>+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
>+}
>+
>+static void
>+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ int32_t result;
>+ uint32_t slot = 0;
>+ struct virtio_gpu_simple_resource *res;
>+ struct virtio_gpu_resource_unmap_blob ublob;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(ublob);
>+
>+ CHECK(ublob.resource_id != 0, cmd);
>+
>+ res = virtio_gpu_find_resource(g, ublob.resource_id);
>+ CHECK(res, cmd);
>+
>+ for (slot = 0; slot < MAX_SLOTS; slot++) {
>+ if (memory_regions[slot].resource_id != ublob.resource_id) {
>+ continue;
>+ }
>+
>+ MemoryRegion *mr = &(memory_regions[slot].mr);
>+ memory_region_del_subregion(&g->parent_obj.hostmem, mr);
>+
>+ memory_regions[slot].resource_id = 0;
>+ memory_regions[slot].used = 0;
>+ break;
>+ }
>+
>+ CHECK((slot < MAX_SLOTS), cmd);
>+ result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
>+ struct virtio_gpu_ctrl_command *cmd)
>+{
>+ struct rutabaga_fence fence = { 0 };
>+ int32_t result;
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
>+
>+ switch (cmd->cmd_hdr.type) {
>+ case VIRTIO_GPU_CMD_CTX_CREATE:
>+ rutabaga_cmd_context_create(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_CTX_DESTROY:
>+ rutabaga_cmd_context_destroy(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
>+ rutabaga_cmd_create_resource_2d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
>+ rutabaga_cmd_create_resource_3d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_SUBMIT_3D:
>+ rutabaga_cmd_submit_3d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
>+ rutabaga_cmd_transfer_to_host_2d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
>+ rutabaga_cmd_transfer_to_host_3d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
>+ rutabaga_cmd_transfer_from_host_3d(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
>+ rutabaga_cmd_attach_backing(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
>+ rutabaga_cmd_detach_backing(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_SET_SCANOUT:
>+ rutabaga_cmd_set_scanout(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
>+ rutabaga_cmd_resource_flush(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_UNREF:
>+ rutabaga_cmd_resource_unref(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
>+ rutabaga_cmd_ctx_attach_resource(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
>+ rutabaga_cmd_ctx_detach_resource(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
>+ rutabaga_cmd_get_capset_info(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_GET_CAPSET:
>+ rutabaga_cmd_get_capset(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
>+ virtio_gpu_get_display_info(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_GET_EDID:
>+ virtio_gpu_get_edid(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
>+ rutabaga_cmd_resource_create_blob(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
>+ rutabaga_cmd_resource_map_blob(g, cmd);
>+ break;
>+ case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
>+ rutabaga_cmd_resource_unmap_blob(g, cmd);
>+ break;
>+ default:
>+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
>+ break;
>+ }
>+
>+ if (cmd->finished) {
>+ return;
>+ }
>+ if (cmd->error) {
>+ error_report("%s: ctrl 0x%x, error 0x%x", __func__,
>+ cmd->cmd_hdr.type, cmd->error);
>+ virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
>+ return;
>+ }
>+ if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
>+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>+ return;
>+ }
>+
>+ fence.flags = cmd->cmd_hdr.flags;
>+ fence.ctx_id = cmd->cmd_hdr.ctx_id;
>+ fence.fence_id = cmd->cmd_hdr.fence_id;
>+ fence.ring_idx = cmd->cmd_hdr.ring_idx;
>+
>+ trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
>+
>+ result = rutabaga_create_fence(vr->rutabaga, &fence);
>+ CHECK_RESULT(result, cmd);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_aio_cb(void *opaque)
>+{
>+ struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
>+ VirtIOGPU *g = (VirtIOGPU *)data->vr;
>+ struct rutabaga_fence fence_data = data->fence;
>+ struct virtio_gpu_ctrl_command *cmd, *tmp;
>+
>+ uint32_t signaled_ctx_specific = fence_data.flags &
>+ RUTABAGA_FLAG_INFO_RING_IDX;
>+
>+ QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
>+ /*
>+ * Fences are on per-context timelines when RUTABAGA_FLAG_INFO_RING_IDX
>+ * is set, so only complete fenced commands on the matching timeline.
>+ */
>+ uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
>+ RUTABAGA_FLAG_INFO_RING_IDX;
>+
>+ if (signaled_ctx_specific != target_ctx_specific) {
>+ continue;
>+ }
>+
>+ if (signaled_ctx_specific &&
>+ (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
>+ continue;
>+ }
>+
>+ if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
>+ continue;
>+ }
>+
>+ trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
>+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
>+ QTAILQ_REMOVE(&g->fenceq, cmd, next);
>+ g_free(cmd);
>+ }
>+
>+ g_free(data);
>+}
>+
>+static void
>+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
>+ struct rutabaga_fence fence_data) {
>+ struct rutabaga_aio_data *data;
>+ VirtIOGPU *g = (VirtIOGPU *)user_data;
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ /*
>+ * Both gfxstream and cross-domain (and even newer versions of
>+ * virglrenderer: see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal
>+ * fence completion on threads ("callback threads") that are different
>+ * from the thread that processes the command queue ("main thread").
>+ *
>+ * crosvm and other virtio-gpu 1.1 implementations enable callback
>+ * threads via locking. However, in QEMU a deadlock is observed if
>+ * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is
>+ * called from a thread other than the main thread.
>+ *
>+ * The reason is that QEMU's internal locking is designed to work with
>+ * QEMU threads (see rcu_register_thread()) and not generic C/C++/Rust
>+ * threads. For now, we can work around this by scheduling the return
>+ * of the fence descriptors on the main thread.
>+ */
>+
>+ data = g_new0(struct rutabaga_aio_data, 1);
>+ data->vr = vr;
>+ data->fence = fence_data;
>+ aio_bh_schedule_oneshot_full(vr->ctx, virtio_gpu_rutabaga_aio_cb,
>+ (void *)data, "aio");
>+}
>+
>+static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
Rather than returning an errno here, which loses interesting error details, the idiomatic way of error handling in QEMU would be to append an `Error **errp` argument and return bool. In case of an error this allows setting `errp` to a comprehensible error message and returning false, instead of just returning `-EINVAL`.
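A rough, untested sketch of the shape I mean (the callers would need adapting as well):

    static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
    {
        VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
        int32_t result;

        /* ... same logic as below, with each bare -EINVAL replaced ... */

        result = rutabaga_init(&builder, &vr->rutabaga);
        if (result) {
            error_setg(errp, "rutabaga_init failed with error %d", result);
            return false;
        }

        return true;
    }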
>+{
>+ int result;
>+ uint64_t capset_mask;
>+ struct rutabaga_channels channels = { 0 };
>+ struct rutabaga_builder builder = { 0 };
>+
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+ vr->rutabaga = NULL;
>+
>+ if (!vr->capset_names) {
Here it could be mentioned that the "capset_names" option is missing. Currently, if one neglects to set this option, the error message is "Failed to init rutabaga" -- which is too sparse to guide the user to a solution.
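E.g. (assuming the Error **errp conversion suggested above):

    if (!vr->capset_names) {
        error_setg(errp, "capset_names property not set");
        return false;
    }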
>+ return -EINVAL;
>+ }
>+
>+ builder.wsi = RUTABAGA_WSI_SURFACELESS;
>+ /*
>+ * Currently, if WSI is specified, the only valid strings are "surfaceless"
>+ * or "headless". Surfaceless doesn't create a native window surface, but
>+ * does copy from the render target to the Pixman buffer if a virtio-gpu
>+ * 2D hypercall is issued. Surfaceless is the default.
>+ *
>+ * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
>+ * use case is automated testing environments where there is no need to view
>+ * results.
>+ *
>+ * In the future, more performant virtio-gpu 2D UI integration may be added.
>+ */
>+ if (vr->wsi) {
>+ if (!strcmp(vr->wsi, "surfaceless")) {
>+ vr->headless = false;
>+ } else if (!strcmp(vr->wsi, "headless")) {
>+ vr->headless = true;
>+ } else {
Here we could mention the option and its unknown value in an error message. I think the idiomatic way to achieve this in QEMU is to turn this into an enum option. I suppose one would then be able to query for accepted values by setting the option value to '?', making the option self-documenting.
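Even without the enum machinery, a minimal sketch of a friendlier message (again assuming the errp conversion) could be:

    } else {
        error_setg(errp, "invalid wsi option '%s', valid values are "
                   "'surfaceless' and 'headless'", vr->wsi);
        return false;
    }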
>+ return -EINVAL;
>+ }
>+ }
>+
>+ result = rutabaga_calculate_capset_mask(vr->capset_names, &capset_mask);
How do we know which values are allowed for the "capset_names" option? It seems that all the magic happens inside rutabaga, which makes the option quite opaque to QEMU and thus to its users. Could we not teach QEMU the allowed values and have it populate capset_mask itself? I think this would make for much better error messages.
Does it then make sense to split this option into multiple ones, to prevent contradictory bits from being set?
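For instance -- just a sketch, the property names (and the RUTABAGA_CAPSET_GFXSTREAM constant) are my guesses, and it assumes a uint64_t capset_mask field in VirtioGpuRutabaga whose bit positions match rutabaga's RUTABAGA_CAPSET_* values:

    static Property virtio_gpu_rutabaga_properties[] = {
        DEFINE_PROP_BIT64("gfxstream", VirtioGpuRutabaga, capset_mask,
                          RUTABAGA_CAPSET_GFXSTREAM, false),
        DEFINE_PROP_BIT64("cross-domain", VirtioGpuRutabaga, capset_mask,
                          RUTABAGA_CAPSET_CROSS_DOMAIN, false),
        DEFINE_PROP_END_OF_LIST(),
    };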
>+ if (result) {
>+ return result;
>+ }
>+
>+ /*
>+ * rutabaga-0.1.1 is only compiled/tested with gfxstream and cross-domain
>+ * support. Future versions may change this to have more context types if
>+ * there is any interest.
>+ */
>+ if (capset_mask & (BIT(RUTABAGA_CAPSET_VIRGL) |
>+ BIT(RUTABAGA_CAPSET_VIRGL2) |
>+ BIT(RUTABAGA_CAPSET_VENUS) |
>+ BIT(RUTABAGA_CAPSET_DRM))) {
Is this a limitation of QEMU or of rutabaga? The above comment suggests it's the latter, so it should be dealt with there rather than in QEMU.
>+ return -EINVAL;
>+ }
>+
>+ builder.user_data = (uint64_t)(uintptr_t)(void *)g;
>+ builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
>+ builder.capset_mask = capset_mask;
>+
>+ if (vr->wayland_socket_path) {
>+ if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) == 0) {
>+ return -EINVAL;
>+ }
>+
>+ channels.channels =
>+ (struct rutabaga_channel *)calloc(1, sizeof(struct rutabaga_channel));
>+ channels.num_channels = 1;
>+ channels.channels[0].channel_name = vr->wayland_socket_path;
>+ channels.channels[0].channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
>+ builder.channels = &channels;
>+ }
>+
>+ result = rutabaga_init(&builder, &vr->rutabaga);
So this is the plain rutabaga FFI API, just returning some int32_t in case of an error. How can we communicate the precise error to a QEMU user? I haven't looked, but rutabaga might use Rust's Result type internally, which would probably contain a highly descriptive error message. Furthermore, during compilation of rutabaga, I could see the usual suspects among Rust error crates being compiled. Can we somehow propagate these errors through the FFI layer and eventually convert them to QEMU's Error type?
The above function seems to be just one of many with an error-hiding API. IOW, the error handling seems to be rather weak in general. Rutabaga seems to be an abstraction layer over various graphics backends, each of which has its own special error cases. In order to cater to its users -- both QEMU and the Android Emulator -- I think the error handling could be improved. Fixing it might also mean less support maintenance in the future.
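To illustrate, a hypothetical FFI addition (not part of the current rutabaga API) could render the Rust error to a string and expose it:

    /* hypothetical: returns a description of the most recent rutabaga error */
    const char *rutabaga_last_error_string(void);

    /* QEMU side: */
    result = rutabaga_init(&builder, &vr->rutabaga);
    if (result) {
        error_setg(errp, "rutabaga_init: %s", rutabaga_last_error_string());
        return false;
    }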
Best regards,
Bernhard
>+ if (builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN)) {
>+ free(channels.channels);
>+ }
>+
>+ memset(&memory_regions, 0, MAX_SLOTS * sizeof(struct MemoryRegionInfo));
>+ vr->ctx = qemu_get_aio_context();
>+ return result;
>+}
>+
>+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
>+{
>+ int result;
>+ uint32_t num_capsets;
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+
>+ if (!vr->rutabaga_active) {
>+ result = virtio_gpu_rutabaga_init(g);
>+ if (result) {
>+ error_report("Failed to init rutabaga");
>+ return 0;
>+ }
>+
>+ vr->rutabaga_active = true;
>+ }
>+
>+ result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
>+ if (result) {
>+ error_report("Failed to get capsets");
>+ return 0;
>+ }
>+ vr->num_capsets = num_capsets;
>+ return num_capsets;
>+}
>+
>+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
>+{
>+ VirtIOGPU *g = VIRTIO_GPU(vdev);
>+ VirtioGpuRutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
>+ struct virtio_gpu_ctrl_command *cmd;
>+
>+ if (!virtio_queue_ready(vq)) {
>+ return;
>+ }
>+
>+ if (!vr->rutabaga_active) {
>+ int result = virtio_gpu_rutabaga_init(g);
>+ if (!result) {
>+ vr->rutabaga_active = true;
>+ }
>+ }
>+
>+ if (!vr->rutabaga_active) {
>+ return;
>+ }
>+
>+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>+ while (cmd) {
>+ cmd->vq = vq;
>+ cmd->error = 0;
>+ cmd->finished = false;
>+ QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
>+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
>+ }
>+
>+ virtio_gpu_process_cmdq(g);
>+}
>+
>+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
>+{
>+ int num_capsets;
>+ VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
>+ VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
>+
>+ num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
>+ if (!num_capsets) {
>+ return;
>+ }
>+
>+#if HOST_BIG_ENDIAN
>+ error_setg(errp, "rutabaga is not supported on bigendian platforms");
>+ return;
>+#endif
>+
>+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
>+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
>+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
>+
>+ bdev->virtio_config.num_capsets = num_capsets;
>+ virtio_gpu_device_realize(qdev, errp);
>+}
>+
>+static Property virtio_gpu_rutabaga_properties[] = {
>+ DEFINE_PROP_STRING("capset_names", VirtioGpuRutabaga, capset_names),
>+ DEFINE_PROP_STRING("wayland_socket_path", VirtioGpuRutabaga,
>+ wayland_socket_path),
>+ DEFINE_PROP_STRING("wsi", VirtioGpuRutabaga, wsi),
>+ DEFINE_PROP_END_OF_LIST(),
>+};
>+
>+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
>+{
>+ DeviceClass *dc = DEVICE_CLASS(klass);
>+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
>+ VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
>+ VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
>+
>+ vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
>+ vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
>+ vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
>+ vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
>+
>+ vdc->realize = virtio_gpu_rutabaga_realize;
>+ device_class_set_props(dc, virtio_gpu_rutabaga_properties);
>+}
>+
>+static const TypeInfo virtio_gpu_rutabaga_info = {
>+ .name = TYPE_VIRTIO_GPU_RUTABAGA,
>+ .parent = TYPE_VIRTIO_GPU,
>+ .instance_size = sizeof(VirtioGpuRutabaga),
>+ .class_init = virtio_gpu_rutabaga_class_init,
>+};
>+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
>+module_kconfig(VIRTIO_GPU);
>+
>+static void virtio_register_types(void)
>+{
>+ type_register_static(&virtio_gpu_rutabaga_info);
>+}
>+
>+type_init(virtio_register_types)
>+
>+module_dep("hw-display-virtio-gpu");
>diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
>new file mode 100644
>index 0000000000..01831bd03f
>--- /dev/null
>+++ b/hw/display/virtio-vga-rutabaga.c
>@@ -0,0 +1,52 @@
>+// SPDX-License-Identifier: GPL-2.0
>+
>+#include "qemu/osdep.h"
>+#include "hw/pci/pci.h"
>+#include "hw/qdev-properties.h"
>+#include "hw/virtio/virtio-gpu.h"
>+#include "hw/display/vga.h"
>+#include "qapi/error.h"
>+#include "qemu/module.h"
>+#include "virtio-vga.h"
>+#include "qom/object.h"
>+
>+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
>+
>+typedef struct VirtIOVGARUTABAGA VirtIOVGARUTABAGA;
>+DECLARE_INSTANCE_CHECKER(VirtIOVGARUTABAGA, VIRTIO_VGA_RUTABAGA,
>+ TYPE_VIRTIO_VGA_RUTABAGA)
>+
>+struct VirtIOVGARUTABAGA {
>+ VirtIOVGABase parent_obj;
>+
>+ VirtioGpuRutabaga vdev;
>+};
>+
>+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
>+{
>+ VirtIOVGARUTABAGA *dev = VIRTIO_VGA_RUTABAGA(obj);
>+
>+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
>+ TYPE_VIRTIO_GPU_RUTABAGA);
>+ VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
>+}
>+
>+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
>+ .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
>+ .parent = TYPE_VIRTIO_VGA_BASE,
>+ .instance_size = sizeof(VirtIOVGARUTABAGA),
>+ .instance_init = virtio_vga_rutabaga_inst_initfn,
>+};
>+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
>+module_kconfig(VIRTIO_VGA);
>+
>+static void virtio_vga_register_types(void)
>+{
>+ if (have_vga) {
>+ virtio_pci_types_register(&virtio_vga_rutabaga_info);
>+ }
>+}
>+
>+type_init(virtio_vga_register_types)
>+
>+module_dep("hw-display-virtio-vga");
* Re: [PATCH v1 0/9] gfxstream + rutabaga_gfx
2023-07-11 2:56 [PATCH v1 0/9] gfxstream + rutabaga_gfx Gurchetan Singh
` (8 preceding siblings ...)
2023-07-11 2:56 ` [PATCH v1 9/9] docs/system: add basic virtio-gpu documentation Gurchetan Singh
@ 2023-07-24 9:56 ` Alyssa Ross
2023-07-26 1:10 ` Gurchetan Singh
9 siblings, 1 reply; 22+ messages in thread
From: Alyssa Ross @ 2023-07-24 9:56 UTC (permalink / raw)
To: Gurchetan Singh, qemu-devel
Cc: kraxel, marcandre.lureau, akihiko.odaki, dmitry.osipenko,
ray.huang, alex.bennee, shentey
Gurchetan Singh <gurchetansingh@chromium.org> writes:
> In terms of API stability/versioning/packaging, once this series is
> reviewed, the plan is to cut a "gfxstream upstream release branch". We
> will have the same API guarantees as any other QEMU project then, i.e no
> breaking API changes for 5 years.
What about Rutabaga?
* Re: [PATCH v1 0/9] gfxstream + rutabaga_gfx
2023-07-24 9:56 ` [PATCH v1 0/9] gfxstream + rutabaga_gfx Alyssa Ross
@ 2023-07-26 1:10 ` Gurchetan Singh
2023-08-01 15:18 ` Rutabaga backwards compatibility Alyssa Ross
0 siblings, 1 reply; 22+ messages in thread
From: Gurchetan Singh @ 2023-07-26 1:10 UTC (permalink / raw)
To: Alyssa Ross
Cc: qemu-devel, kraxel, marcandre.lureau, akihiko.odaki,
dmitry.osipenko, ray.huang, alex.bennee, shentey
On Mon, Jul 24, 2023 at 2:56 AM Alyssa Ross <hi@alyssa.is> wrote:
>
> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
> > In terms of API stability/versioning/packaging, once this series is
> > reviewed, the plan is to cut a "gfxstream upstream release branch". We
> > will have the same API guarantees as any other QEMU project then, i.e no
> > breaking API changes for 5 years.
>
> What about Rutabaga?
Yes, rutabaga + gfxstream will both be versioned and maintain API
backwards compatibility in line with QEMU guidelines.
* Rutabaga backwards compatibility
2023-07-26 1:10 ` Gurchetan Singh
@ 2023-08-01 15:18 ` Alyssa Ross
2023-08-05 1:19 ` Gurchetan Singh
0 siblings, 1 reply; 22+ messages in thread
From: Alyssa Ross @ 2023-08-01 15:18 UTC (permalink / raw)
To: Gurchetan Singh
Cc: qemu-devel, kraxel, marcandre.lureau, akihiko.odaki,
dmitry.osipenko, ray.huang, alex.bennee, shentey, crosvm-dev
Gurchetan Singh <gurchetansingh@chromium.org> writes:
> On Mon, Jul 24, 2023 at 2:56 AM Alyssa Ross <hi@alyssa.is> wrote:
>>
>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>
>> > In terms of API stability/versioning/packaging, once this series is
>> > reviewed, the plan is to cut a "gfxstream upstream release branch". We
>> > will have the same API guarantees as any other QEMU project then, i.e no
>> > breaking API changes for 5 years.
>>
>> What about Rutabaga?
>
> Yes, rutabaga + gfxstream will both be versioned and maintain API
> backwards compatibility in line with QEMU guidelines.
In that case, I should draw your attention to
<https://crrev.com/c/4584252>, which I've just realised while testing v2
of your series here breaks the build of the rutabaga ffi, and which will
require the addition of a "prot" field to struct rutabaga_handle (a
breaking change). I'll push a new version of that CL to fix rutabaga
ffi in the next few days.
Since this is already coming up, before the release has even been made,
is it worth exploring how to limit the rutabaga API to avoid more
breaking changes after the release? Could there be more use of opaque
structs, for example?
(CCing the crosvm list)
* Re: Rutabaga backwards compatibility
2023-08-01 15:18 ` Rutabaga backwards compatibility Alyssa Ross
@ 2023-08-05 1:19 ` Gurchetan Singh
2023-08-05 8:47 ` Alyssa Ross
0 siblings, 1 reply; 22+ messages in thread
From: Gurchetan Singh @ 2023-08-05 1:19 UTC (permalink / raw)
To: Alyssa Ross
Cc: qemu-devel, kraxel, marcandre.lureau, akihiko.odaki,
dmitry.osipenko, ray.huang, alex.bennee, shentey, crosvm-dev
On Tue, Aug 1, 2023 at 8:18 AM Alyssa Ross <hi@alyssa.is> wrote:
> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>
> > On Mon, Jul 24, 2023 at 2:56 AM Alyssa Ross <hi@alyssa.is> wrote:
> >>
> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
> >>
> >> > In terms of API stability/versioning/packaging, once this series is
> >> > reviewed, the plan is to cut a "gfxstream upstream release branch". We
> >> > will have the same API guarantees as any other QEMU project then, i.e no
> >> > breaking API changes for 5 years.
> >>
> >> What about Rutabaga?
> >
> > Yes, rutabaga + gfxstream will both be versioned and maintain API
> > backwards compatibility in line with QEMU guidelines.
>
> In that case, I should draw your attention to
> <https://crrev.com/c/4584252>, which I've just realised while testing v2
> of your series here breaks the build of the rutabaga ffi, and which will
> require the addition of a "prot" field to struct rutabaga_handle (a
> breaking change). I'll push a new version of that CL to fix rutabaga
> ffi in the next few days.
>
Sorry, I didn't see this until now. At first glance, do we need to modify
the rutabaga_handle? Can't we do fcntl(.., F_GETFL) to get the access flags
when needed?
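I.e. something like (untested):

    int flags = fcntl(fd, F_GETFL);
    if (flags < 0) {
        return -errno;
    }

    int prot = ((flags & O_ACCMODE) == O_RDWR) ? (PROT_READ | PROT_WRITE)
                                               : PROT_READ;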
> Since this is already coming up, before the release has even been made,
> is it worth exploring how to limit the rutabaga API to avoid more
> breaking changes after the release? Could there be more use of opaque
> structs, for example?
>
> (CCing the crosvm list)
>
* Re: Rutabaga backwards compatibility
2023-08-05 1:19 ` Gurchetan Singh
@ 2023-08-05 8:47 ` Alyssa Ross
0 siblings, 0 replies; 22+ messages in thread
From: Alyssa Ross @ 2023-08-05 8:47 UTC (permalink / raw)
To: Gurchetan Singh
Cc: qemu-devel, kraxel, marcandre.lureau, akihiko.odaki,
dmitry.osipenko, ray.huang, alex.bennee, shentey, crosvm-dev
Gurchetan Singh <gurchetansingh@chromium.org> writes:
> On Tue, Aug 1, 2023 at 8:18 AM Alyssa Ross <hi@alyssa.is> wrote:
>
>> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>>
>> > On Mon, Jul 24, 2023 at 2:56 AM Alyssa Ross <hi@alyssa.is> wrote:
>> >>
>> >> Gurchetan Singh <gurchetansingh@chromium.org> writes:
>> >>
>> >> > In terms of API stability/versioning/packaging, once this series is
>> >> > reviewed, the plan is to cut a "gfxstream upstream release branch". We
>> >> > will have the same API guarantees as any other QEMU project then, i.e no
>> >> > breaking API changes for 5 years.
>> >>
>> >> What about Rutabaga?
>> >
>> > Yes, rutabaga + gfxstream will both be versioned and maintain API
>> > backwards compatibility in line with QEMU guidelines.
>>
>> In that case, I should draw your attention to
>> <https://crrev.com/c/4584252>, which I've just realised while testing v2
>> of your series here breaks the build of the rutabaga ffi, and which will
>> require the addition of a "prot" field to struct rutabaga_handle (a
>> breaking change). I'll push a new version of that CL to fix rutabaga
>> ffi in the next few days.
>
> Sorry, I didn't see this until now. At first glance, do we need to modify
> the rutabaga_handle? Can't we do fcntl(.., GET_FL) to get the access flags
> when needed?
That was my original approach[1], but it was difficult to make work on
Windows and not popular.
[1]: https://crrev.com/c/4543310