From: Kevin Wolf <kwolf@redhat.com>
To: Coiby Xu <coiby.xu@gmail.com>
Cc: bharatlkmlkvm@gmail.com, qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [PATCH v9 2/5] generic vhost user server
Date: Thu, 18 Jun 2020 15:29:26 +0200
Message-ID: <20200618132926.GC6012@linux.fritz.box>
In-Reply-To: <20200614183907.514282-3-coiby.xu@gmail.com>

On 14.06.2020 20:39, Coiby Xu wrote:
> Sharing QEMU devices via vhost-user protocol.
> 
> Only one vhost-user client can connect to the server at a time.
> 
> Signed-off-by: Coiby Xu <coiby.xu@gmail.com>
> ---
>  util/Makefile.objs       |   1 +
>  util/vhost-user-server.c | 400 +++++++++++++++++++++++++++++++++++++++
>  util/vhost-user-server.h |  61 ++++++
>  3 files changed, 462 insertions(+)
>  create mode 100644 util/vhost-user-server.c
>  create mode 100644 util/vhost-user-server.h
> 
> diff --git a/util/Makefile.objs b/util/Makefile.objs
> index cc5e37177a..b4d4af06dc 100644
> --- a/util/Makefile.objs
> +++ b/util/Makefile.objs
> @@ -66,6 +66,7 @@ util-obj-y += hbitmap.o
>  util-obj-y += main-loop.o
>  util-obj-y += nvdimm-utils.o
>  util-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
> +util-obj-$(CONFIG_LINUX) += vhost-user-server.o
>  util-obj-y += qemu-coroutine-sleep.o
>  util-obj-y += qemu-co-shared-resource.o
>  util-obj-y += qemu-sockets.o
> diff --git a/util/vhost-user-server.c b/util/vhost-user-server.c
> new file mode 100644
> index 0000000000..393beeb6b9
> --- /dev/null
> +++ b/util/vhost-user-server.c
> @@ -0,0 +1,400 @@
> +/*
> + * Sharing QEMU devices via vhost-user protocol
> + *
> + * Author: Coiby Xu <coiby.xu@gmail.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later.  See the COPYING file in the top-level directory.
> + */
> +#include "qemu/osdep.h"
> +#include <sys/eventfd.h>
> +#include "qemu/main-loop.h"
> +#include "vhost-user-server.h"
> +
> +static void vmsg_close_fds(VhostUserMsg *vmsg)
> +{
> +    int i;
> +    for (i = 0; i < vmsg->fd_num; i++) {
> +        close(vmsg->fds[i]);
> +    }
> +}
> +
> +static void vmsg_unblock_fds(VhostUserMsg *vmsg)
> +{
> +    int i;
> +    for (i = 0; i < vmsg->fd_num; i++) {
> +        qemu_set_nonblock(vmsg->fds[i]);
> +    }
> +}
> +
> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
> +                      gpointer opaque);
> +
> +static void close_client(VuServer *server)
> +{
> +    vu_deinit(&server->vu_dev);
> +    object_unref(OBJECT(server->sioc));
> +    object_unref(OBJECT(server->ioc));
> +    server->sioc_slave = NULL;

Where is sioc_slave closed/freed?
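
I would have expected something like this instead (untested; assuming
ioc_slave and sioc_slave point to the same object and slave_io_channel()
took the only reference):

    if (server->ioc_slave) {
        object_unref(OBJECT(server->ioc_slave));
        server->ioc_slave = NULL;
        server->sioc_slave = NULL;
    }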

> +    object_unref(OBJECT(server->ioc_slave));
> +    /*
> +     * Set the callback function for the network listener so another
> +     * vhost-user client can connect to this server
> +     */
> +    qio_net_listener_set_client_func(server->listener,
> +                                     vu_accept,
> +                                     server,
> +                                     NULL);

If connecting another client to the server should work, don't we have to
set at least server->sioc = NULL so that vu_accept() won't error out?
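
i.e. something like this at the end of close_client(), which would also
let panic_cb() drop its own reset (untested):

    server->sioc = NULL;
    server->ioc = NULL;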

> +}
> +
> +static void panic_cb(VuDev *vu_dev, const char *buf)
> +{
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +
> +    if (buf) {
> +        error_report("vu_panic: %s", buf);
> +    }
> +
> +    if (server->sioc) {
> +        close_client(server);
> +        server->sioc = NULL;
> +    }
> +
> +    if (server->device_panic_notifier) {
> +        server->device_panic_notifier(server);
> +    }
> +}
> +
> +static QIOChannel *slave_io_channel(VuServer *server, int fd,
> +                                    Error **local_err)
> +{
> +    if (server->sioc_slave) {
> +        if (fd == server->sioc_slave->fd) {
> +            return server->ioc_slave;
> +        }
> +    } else {
> +        server->sioc_slave = qio_channel_socket_new_fd(fd, local_err);
> +        if (!*local_err) {
> +            server->ioc_slave = QIO_CHANNEL(server->sioc_slave);
> +            return server->ioc_slave;
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static bool coroutine_fn
> +vu_message_read(VuDev *vu_dev, int conn_fd, VhostUserMsg *vmsg)
> +{
> +    struct iovec iov = {
> +        .iov_base = (char *)vmsg,
> +        .iov_len = VHOST_USER_HDR_SIZE,
> +    };
> +    int rc, read_bytes = 0;
> +    Error *local_err = NULL;
> +    /*
> +     * Store fds/nfds returned from qio_channel_readv_full into
> +     * temporary variables.
> +     *
> +     * VhostUserMsg is a packed structure; gcc will complain about passing
> +     * a pointer to a packed structure member if we pass &VhostUserMsg.fd_num
> +     * and &VhostUserMsg.fds directly when calling qio_channel_readv_full,
> +     * thus two temporary variables nfds and fds are used here.
> +     */
> +    size_t nfds = 0, nfds_t = 0;
> +    int *fds_t = NULL;
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    QIOChannel *ioc = NULL;
> +
> +    if (conn_fd == server->sioc->fd) {
> +        ioc = server->ioc;
> +    } else {
> +        /* Slave communication will also use this function to read msg */
> +        ioc = slave_io_channel(server, conn_fd, &local_err);
> +    }
> +
> +    if (!ioc) {
> +        error_report_err(local_err);
> +        goto fail;
> +    }
> +
> +    assert(qemu_in_coroutine());
> +    do {
> +        /*
> +         * qio_channel_readv_full may return short reads, so keep calling it
> +         * until VHOST_USER_HDR_SIZE or 0 bytes have been read in total
> +         */
> +        rc = qio_channel_readv_full(ioc, &iov, 1, &fds_t, &nfds_t, &local_err);
> +        if (rc < 0) {
> +            if (rc == QIO_CHANNEL_ERR_BLOCK) {
> +                qio_channel_yield(ioc, G_IO_IN);
> +                continue;
> +            } else {
> +                error_report_err(local_err);
> +                return false;
> +            }
> +        }
> +        read_bytes += rc;
> +        if (nfds_t > 0) {
> +            if (nfds + nfds_t > G_N_ELEMENTS(vmsg->fds)) {
> +                error_report("A maximum of %d fds are allowed, "
> +                             "however got %lu fds now",
> +                             VHOST_MEMORY_MAX_NREGIONS, nfds + nfds_t);
> +                goto fail;
> +            }
> +            memcpy(vmsg->fds + nfds, fds_t,
> +                   nfds_t *sizeof(vmsg->fds[0]));
> +            nfds += nfds_t;
> +            g_free(fds_t);
> +        }
> +        if (read_bytes == VHOST_USER_HDR_SIZE || rc == 0) {
> +            break;
> +        }
> +        iov.iov_base = (char *)vmsg + read_bytes;
> +        iov.iov_len = VHOST_USER_HDR_SIZE - read_bytes;
> +    } while (true);
> +
> +    vmsg->fd_num = nfds;
> +    /* qio_channel_readv_full will make socket fds blocking, unblock them */
> +    vmsg_unblock_fds(vmsg);
> +    if (vmsg->size > sizeof(vmsg->payload)) {
> +        error_report("Error: too big message request: %d, "
> +                     "size: vmsg->size: %u, "
> +                     "while sizeof(vmsg->payload) = %zu",
> +                     vmsg->request, vmsg->size, sizeof(vmsg->payload));
> +        goto fail;
> +    }
> +
> +    struct iovec iov_payload = {
> +        .iov_base = (char *)&vmsg->payload,
> +        .iov_len = vmsg->size,
> +    };
> +    if (vmsg->size) {
> +        rc = qio_channel_readv_all_eof(ioc, &iov_payload, 1, &local_err);
> +        if (rc == -1) {
> +            error_report_err(local_err);
> +            goto fail;
> +        }
> +    }
> +
> +    return true;
> +
> +fail:
> +    vmsg_close_fds(vmsg);
> +
> +    return false;
> +}
> +
> +
> +static void vu_client_start(VuServer *server);
> +static coroutine_fn void vu_client_trip(void *opaque)
> +{
> +    VuServer *server = opaque;
> +
> +    while (!server->aio_context_changed && server->sioc) {
> +        vu_dispatch(&server->vu_dev);
> +    }
> +
> +    if (server->aio_context_changed && server->sioc) {
> +        server->aio_context_changed = false;
> +        vu_client_start(server);
> +    }
> +}

This is somewhat convoluted, but ok. As soon as my patch "util/async:
Add aio_co_reschedule_self()" is merged, we can use it to simplify this
a bit.
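
With it, vu_client_trip() could become a single loop, roughly like this
(untested sketch):

    static coroutine_fn void vu_client_trip(void *opaque)
    {
        VuServer *server = opaque;

        while (server->sioc) {
            if (server->aio_context_changed) {
                server->aio_context_changed = false;
                /* Move this coroutine to the new context instead of
                 * creating a new coroutine */
                aio_co_reschedule_self(server->ctx);
            }
            vu_dispatch(&server->vu_dev);
        }
    }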

> +static void vu_client_start(VuServer *server)
> +{
> +    server->co_trip = qemu_coroutine_create(vu_client_trip, server);
> +    aio_co_enter(server->ctx, server->co_trip);
> +}
> +
> +/*
> + * a wrapper for vu_kick_cb
> + *
> + * since aio_dispatch can only pass one user data pointer to the
> + * callback function, pack VuDev and pvt into a struct. Then unpack it
> + * and pass them to vu_kick_cb
> + */
> +static void kick_handler(void *opaque)
> +{
> +    KickInfo *kick_info = opaque;
> +    kick_info->cb(kick_info->vu_dev, 0, (void *) kick_info->index);
> +}
> +
> +
> +static void
> +set_watch(VuDev *vu_dev, int fd, int vu_evt,
> +          vu_watch_cb cb, void *pvt)
> +{
> +
> +    VuServer *server = container_of(vu_dev, VuServer, vu_dev);
> +    g_assert(vu_dev);
> +    g_assert(fd >= 0);
> +    long index = (intptr_t) pvt;
> +    g_assert(cb);
> +    KickInfo *kick_info = &server->kick_info[index];
> +    if (!kick_info->cb) {
> +        kick_info->fd = fd;
> +        kick_info->cb = cb;
> +        qemu_set_nonblock(fd);
> +        aio_set_fd_handler(server->ioc->ctx, fd, false, kick_handler,
> +                           NULL, NULL, kick_info);
> +        kick_info->vu_dev = vu_dev;
> +    }
> +}
> +
> +
> +static void remove_watch(VuDev *vu_dev, int fd)
> +{
> +    VuServer *server;
> +    int i;
> +    int index = -1;
> +    g_assert(vu_dev);
> +    g_assert(fd >= 0);
> +
> +    server = container_of(vu_dev, VuServer, vu_dev);
> +    for (i = 0; i < vu_dev->max_queues; i++) {
> +        if (server->kick_info[i].fd == fd) {
> +            index = i;
> +            break;
> +        }
> +    }
> +
> +    if (index == -1) {
> +        return;
> +    }
> +    server->kick_info[i].cb = NULL;
> +    aio_set_fd_handler(server->ioc->ctx, fd, false, NULL, NULL, NULL, NULL);
> +}
> +
> +
> +static void vu_accept(QIONetListener *listener, QIOChannelSocket *sioc,
> +                      gpointer opaque)
> +{
> +    VuServer *server = opaque;
> +
> +    if (server->sioc) {
> +        warn_report("Only one vhost-user client is allowed to "
> +                    "connect to the server at a time");
> +        return;
> +    }
> +
> +    if (!vu_init(&server->vu_dev, server->max_queues, sioc->fd, panic_cb,
> +                 vu_message_read, set_watch, remove_watch, server->vu_iface)) {
> +        error_report("Failed to initialized libvhost-user");
> +        return;
> +    }
> +
> +    /*
> +     * Unset the callback function for the network listener so that another
> +     * vhost-user client keeps waiting until this client disconnects
> +     */
> +    qio_net_listener_set_client_func(server->listener,
> +                                     NULL,
> +                                     NULL,
> +                                     NULL);
> +    server->sioc = sioc;
> +    server->kick_info = g_new0(KickInfo, server->max_queues);
> +    /*
> +     * Increase the object reference, so sioc will not be freed by
> +     * qio_net_listener_channel_func which will call object_unref(OBJECT(sioc))
> +     */
> +    object_ref(OBJECT(server->sioc));
> +    qio_channel_set_name(QIO_CHANNEL(sioc), "vhost-user client");
> +    server->ioc = QIO_CHANNEL(sioc);
> +    object_ref(OBJECT(server->ioc));
> +    qio_channel_attach_aio_context(server->ioc, server->ctx);
> +    qio_channel_set_blocking(QIO_CHANNEL(server->sioc), false, NULL);
> +    vu_client_start(server);
> +}
> +
> +
> +void vhost_user_server_stop(VuServer *server)
> +{
> +    if (!server) {
> +        return;
> +    }

There is no reason why the caller should even pass NULL.

> +    if (server->sioc) {
> +        close_client(server);
> +        object_unref(OBJECT(server->sioc));

close_client() already unrefs it. Do we really hold two references? If
so, why?

I can see that vu_accept() takes an extra reference, but the comment
there says this is because QIOChannel takes ownership.
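
If close_client() really drops the last reference we hold, this block
would reduce to (untested):

    if (server->sioc) {
        close_client(server);
        server->sioc = NULL;
    }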

> +    }
> +
> +    if (server->listener) {
> +        qio_net_listener_disconnect(server->listener);
> +        object_unref(OBJECT(server->listener));
> +    }
> +
> +    g_free(server->kick_info);

Don't we need to wait for co_trip to terminate somewhere? Probably
before freeing any objects because it could still use them.

I assume vhost_user_server_stop() is always called from the main thread
whereas co_trip runs in the server AioContext, so extra care is
necessary.
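
Something like this before the cleanup might do, assuming
vu_client_trip() clears server->co_trip and calls aio_wait_kick() when
it returns (untested sketch):

    /* Drive the event loop until the processing coroutine is gone */
    AIO_WAIT_WHILE(server->ctx, server->co_trip != NULL);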

> +}
> +
> +static void detach_context(VuServer *server)
> +{
> +    int i;
> +    AioContext *ctx = server->ioc->ctx;
> +    qio_channel_detach_aio_context(server->ioc);
> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
> +        if (server->kick_info[i].cb) {
> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false, NULL,
> +                               NULL, NULL, NULL);
> +        }
> +    }
> +}
> +
> +static void attach_context(VuServer *server, AioContext *ctx)
> +{
> +    int i;
> +    qio_channel_attach_aio_context(server->ioc, ctx);
> +    server->aio_context_changed = true;
> +    if (server->co_trip) {
> +        aio_co_schedule(ctx, server->co_trip);
> +    }
> +    for (i = 0; i < server->vu_dev.max_queues; i++) {
> +        if (server->kick_info[i].cb) {
> +            aio_set_fd_handler(ctx, server->kick_info[i].fd, false,
> +                               kick_handler, NULL, NULL,
> +                               &server->kick_info[i]);
> +        }
> +    }
> +}

There is a lot of duplication between detach_context() and
attach_context(). I think implementing this directly in
vhost_user_server_set_aio_context() for both cases at once would result
in simpler code.
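
Roughly like this (untested sketch, same semantics as the two helpers):

    void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
    {
        AioContext *fd_ctx;
        int i;

        server->ctx = ctx ? ctx : qemu_get_aio_context();
        if (!server->sioc) {
            return;
        }

        /* Kick fds are removed from the old context or added to the new
         * one; read ioc->ctx before it is detached below */
        fd_ctx = ctx ? ctx : server->ioc->ctx;

        if (ctx) {
            qio_channel_attach_aio_context(server->ioc, ctx);
            server->aio_context_changed = true;
            if (server->co_trip) {
                aio_co_schedule(ctx, server->co_trip);
            }
        } else {
            qio_channel_detach_aio_context(server->ioc);
        }

        for (i = 0; i < server->vu_dev.max_queues; i++) {
            KickInfo *kick_info = &server->kick_info[i];
            if (kick_info->cb) {
                aio_set_fd_handler(fd_ctx, kick_info->fd, false,
                                   ctx ? kick_handler : NULL,
                                   NULL, NULL,
                                   ctx ? kick_info : NULL);
            }
        }
    }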

> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server)
> +{
> +    server->ctx = ctx ? ctx : qemu_get_aio_context();
> +    if (!server->sioc) {
> +        return;
> +    }
> +    if (ctx) {
> +        attach_context(server, ctx);
> +    } else {
> +        detach_context(server);
> +    }
> +}

What happens if the VuServer is already attached to an AioContext and
you change it to another AioContext? Shouldn't it be detached from the
old context and attached to the new one instead of only doing the
latter?
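
i.e. something like this before attaching, relying on ioc->ctx to tell
whether we're currently attached (untested):

    if (server->sioc && server->ioc->ctx) {
        detach_context(server);
    }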

> +
> +bool vhost_user_server_start(VuServer *server,
> +                             SocketAddress *socket_addr,
> +                             AioContext *ctx,
> +                             uint16_t max_queues,
> +                             DevicePanicNotifierFn *device_panic_notifier,
> +                             const VuDevIface *vu_iface,
> +                             Error **errp)
> +{

I think this is the function that is supposed to initialise the VuServer
object, so would it be better to first zero it out completely?

Or alternatively assign it completely like this (which automatically
zeroes any unspecified field):

    *server = (VuServer) {
        .vu_iface       = vu_iface,
        .max_queues     = max_queues,
        ...
    };

> +    server->listener = qio_net_listener_new();
> +    if (qio_net_listener_open_sync(server->listener, socket_addr, 1,
> +                                   errp) < 0) {
> +        return false;
> +    }
> +
> +    qio_net_listener_set_name(server->listener, "vhost-user-backend-listener");
> +
> +    server->vu_iface = vu_iface;
> +    server->max_queues = max_queues;
> +    server->ctx = ctx;
> +    server->device_panic_notifier = device_panic_notifier;
> +    qio_net_listener_set_client_func(server->listener,
> +                                     vu_accept,
> +                                     server,
> +                                     NULL);
> +
> +    return true;
> +}
> diff --git a/util/vhost-user-server.h b/util/vhost-user-server.h
> new file mode 100644
> index 0000000000..5baf58f96a
> --- /dev/null
> +++ b/util/vhost-user-server.h
> @@ -0,0 +1,61 @@
> +/*
> + * Sharing QEMU devices via vhost-user protocol
> + *
> + * Author: Coiby Xu <coiby.xu@gmail.com>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or
> + * later.  See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef VHOST_USER_SERVER_H
> +#define VHOST_USER_SERVER_H
> +
> +#include "contrib/libvhost-user/libvhost-user.h"
> +#include "io/channel-socket.h"
> +#include "io/channel-file.h"
> +#include "io/net-listener.h"
> +#include "qemu/error-report.h"
> +#include "qapi/error.h"
> +#include "standard-headers/linux/virtio_blk.h"
> +
> +typedef struct KickInfo {
> +    VuDev *vu_dev;
> +    int fd; /*kick fd*/
> +    long index; /*queue index*/
> +    vu_watch_cb cb;
> +} KickInfo;
> +
> +typedef struct VuServer {
> +    QIONetListener *listener;
> +    AioContext *ctx;
> +    void (*device_panic_notifier)(struct VuServer *server) ;

Extra space before the semicolon.

> +    int max_queues;
> +    const VuDevIface *vu_iface;
> +    VuDev vu_dev;
> +    QIOChannel *ioc; /* The I/O channel with the client */
> +    QIOChannelSocket *sioc; /* The underlying data channel with the client */
> +    /* IOChannel for fd provided via VHOST_USER_SET_SLAVE_REQ_FD */
> +    QIOChannel *ioc_slave;
> +    QIOChannelSocket *sioc_slave;
> +    Coroutine *co_trip; /* coroutine for processing VhostUserMsg */
> +    KickInfo *kick_info; /* an array with the length of the queue number */

"an array with @max_queues elements"?

> +    /* restart coroutine co_trip if AIOContext is changed */
> +    bool aio_context_changed;
> +} VuServer;
> +
> +
> +typedef void DevicePanicNotifierFn(struct VuServer *server);
> +
> +bool vhost_user_server_start(VuServer *server,
> +                             SocketAddress *unix_socket,
> +                             AioContext *ctx,
> +                             uint16_t max_queues,
> +                             DevicePanicNotifierFn *device_panic_notifier,
> +                             const VuDevIface *vu_iface,
> +                             Error **errp);
> +
> +void vhost_user_server_stop(VuServer *server);
> +
> +void vhost_user_server_set_aio_context(AioContext *ctx, VuServer *server);
> +
> +#endif /* VHOST_USER_SERVER_H */

Kevin


