From: Jason Wang <jasowang@redhat.com>
To: Yuanhan Liu <yuanhan.liu@linux.intel.com>, qemu-devel@nongnu.org
Cc: mst@redhat.com, changchun.ouyang@intel.com
Subject: Re: [Qemu-devel] [PATCH 5/6] vhost-user: add multiple queue support
Date: Mon, 14 Sep 2015 18:06:37 +0800 [thread overview]
Message-ID: <55F69C2D.8030607@redhat.com> (raw)
In-Reply-To: <1442221092-4545-6-git-send-email-yuanhan.liu@linux.intel.com>
On 09/14/2015 04:58 PM, Yuanhan Liu wrote:
> From: Changchun Ouyang <changchun.ouyang@intel.com>
>
> This patch is initially based on a patch from Nikolay Nikolaev.
>
> Here is the latest version of the vhost-user multiple queue support,
> implemented by creating an nc and vhost_net pair for each queue.
>
> What differs from the last version is that this patch addresses two major
> concerns from Michael and fixes one hidden bug.
>
> - Concern #1: no feedback when the backend can't support the number of
> requested queues (given via the queues=# option).
>
> Here we address this issue by querying the VHOST_USER_PROTOCOL_F_MQ
> protocol feature first: if it is not set, the backend doesn't support
> the mq feature, and we let max_queues be 1. Otherwise, we send
> another message, VHOST_USER_GET_QUEUE_NUM, to get the max_queues
> the backend supports.
>
> At the vhost-user initiation stage (net_vhost_user_start), we then
> initiate the first queue only, which, in the meantime, also retrieves
> max_queues. We then do a simple comparison: if requested_queues >
> max_queues, we exit (I guess it's safe to exit here, as the VM is not
> running yet).
>
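> A rough sketch of that negotiation (illustrative only; vhost_user_get_u64()
> stands in for whatever message helper the patch actually uses to issue
> VHOST_USER_GET_QUEUE_NUM):
>
>     static int vhost_user_query_max_queues(struct vhost_dev *dev,
>                                            uint64_t *max_queues)
>     {
>         if (!(dev->protocol_features &
>               (1ULL << VHOST_USER_PROTOCOL_F_MQ))) {
>             /* no MQ support in the backend: limit to one queue pair */
>             *max_queues = 1;
>             return 0;
>         }
>         /* otherwise ask the backend how many queues it supports */
>         return vhost_user_get_u64(dev, VHOST_USER_GET_QUEUE_NUM, max_queues);
>     }
>
>     /* later, from net_vhost_user_start()-like code */
>     if (requested_queues > max_queues) {
>         error_report("queues=%d exceeds the %" PRIu64 " queues the "
>                      "backend supports", requested_queues, max_queues);
>         exit(1);    /* safe: the VM is not running yet */
>     }
>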
> - Concern #2: some messages are sent more times than necessary.
>
> We came to an agreement with Michael that we could categorize vhost-user
> messages into two types: non-vring-specific messages, which should
> be sent only once, and vring-specific messages, which should be sent
> per queue.
>
> Here I introduced a helper function, vhost_user_one_time_request(),
> which classifies the following messages as non-vring-specific:
>
> VHOST_USER_GET_FEATURES
> VHOST_USER_SET_FEATURES
> VHOST_USER_GET_PROTOCOL_FEATURES
> VHOST_USER_SET_PROTOCOL_FEATURES
> VHOST_USER_SET_OWNER
> VHOST_USER_RESET_DEVICE
> VHOST_USER_SET_MEM_TABLE
> VHOST_USER_GET_QUEUE_NUM
>
> For the above messages, we simply ignore any attempt to send them again
> after the first time.
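>
> Based on the list above, the helper is essentially a switch statement,
> and the send path skips such a request for every queue but the first
> (sketch; the exact shape and the vq_index check are illustrative):
>
>     static bool vhost_user_one_time_request(VhostUserRequest request)
>     {
>         switch (request) {
>         case VHOST_USER_GET_FEATURES:
>         case VHOST_USER_SET_FEATURES:
>         case VHOST_USER_GET_PROTOCOL_FEATURES:
>         case VHOST_USER_SET_PROTOCOL_FEATURES:
>         case VHOST_USER_SET_OWNER:
>         case VHOST_USER_RESET_DEVICE:
>         case VHOST_USER_SET_MEM_TABLE:
>         case VHOST_USER_GET_QUEUE_NUM:
>             return true;
>         default:
>             return false;
>         }
>     }
>
>     /* in the message-send path */
>     if (vhost_user_one_time_request(msg.request) && dev->vq_index != 0) {
>         return 0;    /* already sent for the first queue; skip it */
>     }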
>
> I also observed a hidden bug in the last version. We register the char dev
> event handler N times, which is unnecessary as well as buggy: a later
> registration overwrites the former one, as qemu_chr_add_handlers() does
> not chain handlers but instead overwrites the old one. So, in theory,
> invoking qemu_chr_add_handlers() N times will not end up calling the
> handler N times.
>
> However, the reason the handler is invoked N times is that we start
> the backend (the connection server) first, and hence when
> net_vhost_user_init() is executed, the connection is already established,
> so qemu_chr_add_handlers() invokes the handler immediately, which makes
> it look as if we invoked the handler (net_vhost_user_event) directly
> from net_vhost_user_init().
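>
> In other words, the old shape was roughly (illustrative, not the literal
> old code; per_queue_state is a placeholder name):
>
>     for (i = 0; i < queues; i++) {
>         /* each per-queue init re-registers the same event handler */
>         qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event,
>                               per_queue_state[i]);
>     }
>
> Only the last registration survives, yet every call still fires
> CHR_EVENT_OPENED immediately because the connection is already up,
> which is why the handler ran N times anyway.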
>
> The solution I came up with is to make VhostUserState an upper-level
> structure that includes N nc and vhost_net pairs:
>
> typedef struct VhostUserNetPair {
>     NetClientState *nc;
>     VHostNetState *vhost_net;
> } VhostUserNetPair;
>
> typedef struct VhostUserState {
>     CharDriverState *chr;
>     bool running;
>     int queues;
>     struct VhostUserNetPair pairs[];
> } VhostUserState;
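>
> With that layering the chardev handler is registered exactly once. A
> sketch of the intended shape (helper names and signatures here are
> illustrative, not necessarily what the patch does):
>
>     static int net_vhost_user_init(NetClientState *peer, const char *device,
>                                    const char *name, CharDriverState *chr,
>                                    int queues)
>     {
>         VhostUserState *s = g_malloc0(sizeof(*s) +
>                                       queues * sizeof(s->pairs[0]));
>         int i;
>
>         s->chr = chr;
>         s->queues = queues;
>
>         for (i = 0; i < queues; i++) {
>             /* one NetClientState per queue pair; the matching vhost_net
>              * is created later, from the CHR_EVENT_OPENED handler */
>             s->pairs[i].nc = qemu_new_net_client(&net_vhost_user_info,
>                                                  peer, device, name);
>         }
>
>         /* single registration: net_vhost_user_event() receives the whole
>          * VhostUserState and walks s->pairs[] to start/stop all queues */
>         qemu_chr_add_handlers(chr, NULL, NULL, net_vhost_user_event, s);
>         return 0;
>     }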
>
> v8: set net->dev.vq_index correctly inside vhost_net_init() based on the
> value from net->nc->queue_index.
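>
> Roughly, that is an assignment along these lines in the vhost-user branch
> of vhost_net_init() (the exact expression may differ; the factor of two
> reflects the rx/tx vring pair owned by each queue):
>
>     net->dev.vq_index = net->nc->queue_index * 2;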
>
> Signed-off-by: Nikolay Nikolaev <n.nikolaev@virtualopensystems.com>
> Signed-off-by: Changchun Ouyang <changchun.ouyang@intel.com>
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> ---
> docs/specs/vhost-user.txt | 13 +++++
> hw/net/vhost_net.c | 5 +-
> hw/virtio/vhost-user.c | 32 ++++++++++-
> include/net/net.h | 1 +
> net/vhost-user.c | 140 +++++++++++++++++++++++++++++++++-------------
> qapi-schema.json | 6 +-
> qemu-options.hx | 5 +-
> 7 files changed, 157 insertions(+), 45 deletions(-)
I saw this version too late. I have some comments on the last version, and
it looks like many of them still apply here. Please see them there.
Thanks