From: Stefan Hajnoczi <stefanha@gmail.com>
To: Gurchetan Singh <gurchetansingh@chromium.org>
Cc: qemu-devel@nongnu.org, pbonzini@redhat.com, philmd@linaro.org,
david@redhat.com, stefanha@redhat.com, kraxel@redhat.com,
marcandre.lureau@redhat.com, akihiko.odaki@gmail.com,
dmitry.osipenko@collabora.com, ray.huang@amd.com,
alex.bennee@linaro.org
Subject: Re: [RFC PATCH 13/13] HACK: schedule fence return on main AIO context
Date: Fri, 21 Apr 2023 11:59:56 -0400
Message-ID: <CAJSP0QWLOmTG2SSzQwhkSf0++VT5y3NBTnVURuHG=PQi-vss5g@mail.gmail.com>
In-Reply-To: <20230421011223.718-14-gurchetansingh@chromium.org>
On Thu, 20 Apr 2023 at 21:13, Gurchetan Singh
<gurchetansingh@chromium.org> wrote:
>
> Both gfxstream and cross-domain (and even newer versions of
> virglrenderer: see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal
> fence completion on threads ("callback threads") that are
> different from the thread that processes the command queue
> ("main thread").
>
> This is generally possible with locking, and this is what we do
> in crosvm and other virtio-gpu 1.1 implementations. However, on
> QEMU a deadlock is observed if virtio_gpu_ctrl_response_nodata(..)
> [used in the fence callback] is used from a thread that is not the
> main thread.
>
> The reason is that the main thread takes the big QEMU lock (BQL) somewhere
> when processing the command queue, and virtio_gpu_ctrl_response_nodata(..)
> needs that lock. If you add in the lock needed to protect &g->fenceq
> from concurrent access by the main thread and the callback threads,
> you can end up with deadlocks.
>
> It's possible to work around this by scheduling the return of the fence
> descriptors via aio_bh_schedule_oneshot_full(..), but that somewhat
> negates the rationale for the asynchronous callbacks.
>
> I also played around with aio_context_acquire()/aio_context_release(),
> but that doesn't seem to help.
>
> Is signaling the virtio_queue outside of the main thread possible? If
> so, how?
Yes, if you want a specific thread to process a virtqueue, monitor the
virtqueue's host_notifier (aka ioeventfd) from that thread. That's how
--device virtio-blk,iothread= works. It attaches the host_notifier to
the IOThread's AioContext. Whenever the guest kicks the virtqueue, the
IOThread will respond instead of QEMU's main loop thread.
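Roughly, the pattern (an untested sketch from memory, following what
hw/block/dataplane/virtio-blk.c does; the exact helper names and signatures
may differ in your tree, and the function name here is made up for
illustration) looks like this:

  #include "hw/virtio/virtio.h"
  #include "hw/virtio/virtio-bus.h"
  #include "sysemu/iothread.h"

  /* Route one virtqueue's ioeventfd to an IOThread's AioContext so guest
   * kicks are handled there instead of in the main loop.
   */
  static int attach_vq_to_iothread(VirtIODevice *vdev, int qidx,
                                   IOThread *iothread)
  {
      BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
      AioContext *ctx = iothread_get_aio_context(iothread);
      VirtQueue *vq = virtio_get_queue(vdev, qidx);
      int r;

      /* Create and assign the host notifier (ioeventfd) for this queue */
      r = virtio_bus_set_host_notifier(VIRTIO_BUS(qbus), qidx, true);
      if (r < 0) {
          return r;
      }

      /* From now on the IOThread, not the main loop, runs the vq handler */
      aio_context_acquire(ctx);
      virtio_queue_aio_attach_host_notifier(vq, ctx);
      aio_context_release(ctx);
      return 0;
  }

Note that the virtqueue handler then runs without the BQL, so the device's
own state (e.g. g->fenceq) needs its own locking.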
That said, I don't know the threading model of your code. Could you
explain which threads are involved? Do you want to process all buffers
from the virtqueue in a specific thread, or only these fence buffers?
>
> Signed-off-by: Gurchetan Singh <gurchetansingh@chromium.org>
> ---
> hw/display/virtio-gpu-rutabaga.c | 29 ++++++++++++++++++++++++++---
> include/hw/virtio/virtio-gpu.h | 1 +
> 2 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
> index 196267aac2..5c296aeef1 100644
> --- a/hw/display/virtio-gpu-rutabaga.c
> +++ b/hw/display/virtio-gpu-rutabaga.c
> @@ -31,6 +31,11 @@ static int virtio_gpu_rutabaga_init(VirtIOGPU *g);
>
> #define CHECK_RESULT(result, cmd) CHECK(result == 0, cmd)
>
> +struct rutabaga_aio_data {
> + struct VirtIOGPUGL *virtio_gpu;
> + struct rutabaga_fence fence;
> +};
> +
> static void
> virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
> uint32_t resource_id)
> @@ -823,10 +828,11 @@ virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
> }
>
> static void
> -virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> - struct rutabaga_fence fence_data)
> +virtio_gpu_rutabaga_aio_cb(void *opaque)
> {
> - VirtIOGPU *g = (VirtIOGPU *)(void*)(uintptr_t)user_data;
> + struct rutabaga_aio_data *data = (struct rutabaga_aio_data *)opaque;
> + VirtIOGPU *g = (VirtIOGPU *)data->virtio_gpu;
> + struct rutabaga_fence fence_data = data->fence;
> struct virtio_gpu_ctrl_command *cmd, *tmp;
>
> bool signaled_ctx_specific = fence_data.flags & RUTABAGA_FLAG_INFO_RING_IDX;
> @@ -856,6 +862,22 @@ virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> QTAILQ_REMOVE(&g->fenceq, cmd, next);
> g_free(cmd);
> }
> +
> + g_free(data);
> +}
> +
> +static void
> +virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
> + struct rutabaga_fence fence_data) {
> + struct rutabaga_aio_data *data;
> + VirtIOGPU *g = (VirtIOGPU *)(void*)(uintptr_t)user_data;
> + GET_VIRTIO_GPU_GL(g);
> +
> + data = g_new0(struct rutabaga_aio_data, 1);
> + data->virtio_gpu = virtio_gpu;
> + data->fence = fence_data;
> + aio_bh_schedule_oneshot_full(virtio_gpu->ctx, virtio_gpu_rutabaga_aio_cb,
> + (void *)data, "aio");
> }
>
> static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
> @@ -912,6 +934,7 @@ static int virtio_gpu_rutabaga_init(VirtIOGPU *g)
> free(channels.channels);
> }
>
> + virtio_gpu->ctx = qemu_get_aio_context();
> return result;
> }
>
> diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
> index 034c71e8f5..b33ad0c68f 100644
> --- a/include/hw/virtio/virtio-gpu.h
> +++ b/include/hw/virtio/virtio-gpu.h
> @@ -225,6 +225,7 @@ struct VirtIOGPUGL {
> char *wayland_socket_path;
> uint32_t num_capsets;
> void *rutabaga;
> + AioContext *ctx;
> };
>
> struct VhostUserGPU {
> --
> 2.40.0.634.g4ca3ef3211-goog
>
>