qemu-devel.nongnu.org archive mirror
 help / color / mirror / Atom feed
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Maxime Coquelin <maxime.coquelin@redhat.com>
Cc: qemu-devel@nongnu.org, mlureau@redhat.com,
	zhengxiang9@huawei.com, lersek@redhat.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [PATCH 2/4] vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request
Date: Tue, 16 Jan 2018 05:05:09 +0200	[thread overview]
Message-ID: <20180116050246-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20180112145658.17121-3-maxime.coquelin@redhat.com>

On Fri, Jan 12, 2018 at 03:56:56PM +0100, Maxime Coquelin wrote:
> When the slave cannot add queues dynamically,


Could you please clarify the motivation a bit?
Why is it such a big deal to resize the queue array?
We know no queues are used before all of them are initialized,
so you don't need to worry about synchronizing with threads
processing the queues at the same time.

> it needs to know for
> how many queues to wait to be initialized.
> 
> This patch introduce new vhost-user protocol feature & request for
> the master to send the number of queue pairs allocated by the
> driver.

Assuming we can fix the previous message, I think specifying
the # of queues would be better.


> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  docs/interop/vhost-user.txt       | 16 ++++++++++++++++
>  hw/virtio/vhost-user.c            | 24 ++++++++++++++++++++++++
>  include/hw/virtio/vhost-backend.h |  3 +++
>  3 files changed, 43 insertions(+)
> 
> diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt
> index 8a14191a1e..85c0e03a95 100644
> --- a/docs/interop/vhost-user.txt
> +++ b/docs/interop/vhost-user.txt
> @@ -218,6 +218,9 @@ The max number of queue pairs the slave supports can be queried with message
>  VHOST_USER_GET_QUEUE_NUM. Master should stop when the number of
>  requested queues is bigger than that.
>  
> +When VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM is negotiated, the master must send
> +the number of queue pairs initialized with message VHOST_USER_SET_QUEUE_NUM.
> +
>  As all queues share one connection, the master uses a unique index for each
>  queue in the sent message to identify a specified queue. One queue pair
>  is enabled initially. More queues are enabled dynamically, by sending
> @@ -354,6 +357,7 @@ Protocol features
>  #define VHOST_USER_PROTOCOL_F_MTU            4
>  #define VHOST_USER_PROTOCOL_F_SLAVE_REQ      5
>  #define VHOST_USER_PROTOCOL_F_CROSS_ENDIAN   6
> +#define VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM  7
>  
>  Master message types
>  --------------------
> @@ -623,6 +627,18 @@ Master message types
>        and expect this message once (per VQ) during device configuration
>        (ie. before the master starts the VQ).
>  
> + * VHOST_USER_SET_QUEUE_NUM
> +
> +      Id: 24
> +      Equivalent ioctl: N/A
> +      Master payload: u64
> +
> +      Set the number of queue pairs initialized.
> +      Master sends such request to notify the slave the number of queue pairs
> +      that have been initialized.
> +      This request should only be sent if VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM
> +      feature has been successfully negotiated.
> +
>  Slave message types
>  -------------------
>  
> diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
> index 093675ed98..9e7728d2da 100644
> --- a/hw/virtio/vhost-user.c
> +++ b/hw/virtio/vhost-user.c
> @@ -34,6 +34,7 @@ enum VhostUserProtocolFeature {
>      VHOST_USER_PROTOCOL_F_NET_MTU = 4,
>      VHOST_USER_PROTOCOL_F_SLAVE_REQ = 5,
>      VHOST_USER_PROTOCOL_F_CROSS_ENDIAN = 6,
> +    VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM = 7,
>  
>      VHOST_USER_PROTOCOL_F_MAX
>  };
> @@ -65,6 +66,7 @@ typedef enum VhostUserRequest {
>      VHOST_USER_SET_SLAVE_REQ_FD = 21,
>      VHOST_USER_IOTLB_MSG = 22,
>      VHOST_USER_SET_VRING_ENDIAN = 23,
> +    VHOST_USER_SET_QUEUE_NUM = 24,
>      VHOST_USER_MAX
>  } VhostUserRequest;
>  
> @@ -922,6 +924,27 @@ static void vhost_user_set_iotlb_callback(struct vhost_dev *dev, int enabled)
>      /* No-op as the receive channel is not dedicated to IOTLB messages. */
>  }
>  
> +static int vhost_user_set_queue_num(struct vhost_dev *dev, uint64_t queues)
> +{
> +    VhostUserMsg msg = {
> +        .request = VHOST_USER_SET_QUEUE_NUM,
> +        .size = sizeof(msg.payload.u64),
> +        .flags = VHOST_USER_VERSION,
> +        .payload.u64 = queues,
> +    };
> +
> +    if (!(dev->protocol_features &
> +                (1ULL << VHOST_USER_PROTOCOL_F_SET_QUEUE_NUM))) {
> +        return 0;
> +    }
> +
> +    if (vhost_user_write(dev, &msg, NULL, 0) < 0) {
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
>  const VhostOps user_ops = {
>          .backend_type = VHOST_BACKEND_TYPE_USER,
>          .vhost_backend_init = vhost_user_init,
> @@ -948,4 +971,5 @@ const VhostOps user_ops = {
>          .vhost_net_set_mtu = vhost_user_net_set_mtu,
>          .vhost_set_iotlb_callback = vhost_user_set_iotlb_callback,
>          .vhost_send_device_iotlb_msg = vhost_user_send_device_iotlb_msg,
> +        .vhost_set_queue_num = vhost_user_set_queue_num,
>  };
> diff --git a/include/hw/virtio/vhost-backend.h b/include/hw/virtio/vhost-backend.h
> index a7a5f22bc6..1dd3e4bbf3 100644
> --- a/include/hw/virtio/vhost-backend.h
> +++ b/include/hw/virtio/vhost-backend.h
> @@ -84,6 +84,8 @@ typedef void (*vhost_set_iotlb_callback_op)(struct vhost_dev *dev,
>                                             int enabled);
>  typedef int (*vhost_send_device_iotlb_msg_op)(struct vhost_dev *dev,
>                                                struct vhost_iotlb_msg *imsg);
> +typedef int (*vhost_set_queue_num_op)(struct vhost_dev *dev,
> +                                      uint64_t queues);
>  
>  typedef struct VhostOps {
>      VhostBackendType backend_type;
> @@ -118,6 +120,7 @@ typedef struct VhostOps {
>      vhost_vsock_set_running_op vhost_vsock_set_running;
>      vhost_set_iotlb_callback_op vhost_set_iotlb_callback;
>      vhost_send_device_iotlb_msg_op vhost_send_device_iotlb_msg;
> +    vhost_set_queue_num_op vhost_set_queue_num;
>  } VhostOps;
>  
>  extern const VhostOps user_ops;
> -- 
> 2.14.3

  reply	other threads:[~2018-01-16  3:05 UTC|newest]

Thread overview: 11+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-01-12 14:56 [Qemu-devel] [PATCH 0/4] vhost-user: notify backend with number of queues setup Maxime Coquelin
2018-01-12 14:56 ` [Qemu-devel] [PATCH 1/4] vhost-user: fix multiple queue specification Maxime Coquelin
2018-01-16  3:00   ` Michael S. Tsirkin
2018-01-16 12:35     ` Maxime Coquelin
2018-01-12 14:56 ` [Qemu-devel] [PATCH 2/4] vhost-user: specify and implement VHOST_USER_SET_QUEUE_NUM request Maxime Coquelin
2018-01-16  3:05   ` Michael S. Tsirkin [this message]
2018-01-16 14:56     ` Maxime Coquelin
2018-01-12 14:56 ` [Qemu-devel] [PATCH 3/4] vhost-net: add vhost_net_set_queue_num helper Maxime Coquelin
2018-01-12 14:56 ` [Qemu-devel] [PATCH 4/4] virtio-net: notify backend with number of queue pairs setup Maxime Coquelin
2018-01-16  3:07   ` Michael S. Tsirkin
2018-01-16 14:16     ` Maxime Coquelin

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20180116050246-mutt-send-email-mst@kernel.org \
    --to=mst@redhat.com \
    --cc=lersek@redhat.com \
    --cc=maxime.coquelin@redhat.com \
    --cc=mlureau@redhat.com \
    --cc=pbonzini@redhat.com \
    --cc=qemu-devel@nongnu.org \
    --cc=zhengxiang9@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).