Date: Mon, 8 Dec 2014 11:41:41 +0200
From: "Michael S. Tsirkin"
Message-ID: <20141208094141.GA19611@redhat.com>
References: <20141206165241.4064.61867.stgit@i3820> <548569A5.30402@6wind.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <548569A5.30402@6wind.com>
Subject: Re: [Qemu-devel] [PATCH] vhost-user: multiqueue support
To: Olivier MATZ
Cc: snabb-devel@googlegroups.com, qemu-devel@nongnu.org, Nikolay Nikolaev, thomas.long@intel.com, tech@virtualopensystems.com

On Mon, Dec 08, 2014 at 10:04:37AM +0100, Olivier MATZ wrote:
> Hi Nikolay,
> 
> On 12/06/2014 05:52 PM, Nikolay Nikolaev wrote:
> > Vhost-user will implement multiqueue support in a similar way to what
> > vhost already has - a separate thread for each queue.
> > 
> > To enable multiqueue functionality, a new command line parameter
> > "queues" is introduced for the vhost-user netdev.
> > 
> > Signed-off-by: Nikolay Nikolaev
> > ---
> >  docs/specs/vhost-user.txt |    7 +++++++
> >  hw/virtio/vhost-user.c    |    6 +++++-
> >  net/vhost-user.c          |   35 +++++++++++++++++++++++------------
> >  qapi-schema.json          |    5 ++++-
> >  qemu-options.hx           |    5 +++--
> >  5 files changed, 42 insertions(+), 16 deletions(-)
> 
> [...]
> 
> > --- a/net/vhost-user.c
> > +++ b/net/vhost-user.c
> > @@ -134,25 +134,27 @@ static void net_vhost_user_event(void *opaque, int event)
> >  
> >  static int net_vhost_user_init(NetClientState *peer, const char *device,
> >                                 const char *name, CharDriverState *chr,
> > -                               bool vhostforce)
> > +                               bool vhostforce, uint32_t queues)
> >  {
> >      NetClientState *nc;
> >      VhostUserState *s;
> > +    int i;
> >  
> > -    nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> > +    for (i = 0; i < queues; i++) {
> > +        nc = qemu_new_net_client(&net_vhost_user_info, peer, device, name);
> >  
> > -    snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user to %s",
> > -             chr->label);
> > +        snprintf(nc->info_str, sizeof(nc->info_str), "vhost-user%d to %s",
> > +                 i, chr->label);
> 
> Now that several vhost-user net clients point to the same unix socket,
> it could make sense to display "nc->info_str" instead of "s->chr->label"
> in net_vhost_user_event().
> Something like this:
> 
> --- a/net/vhost-user.c
> +++ b/net/vhost-user.c
> @@ -122,36 +122,39 @@ static void net_vhost_user_event(void *opaque, int event)
>      case CHR_EVENT_OPENED:
>          vhost_user_start(s);
>          net_vhost_link_down(s, false);
> -        error_report("chardev \"%s\" went up\n", s->chr->label);
> +        error_report("chardev \"%s\" went up\n", s->nc.info_str);
>          break;
>      case CHR_EVENT_CLOSED:
>          net_vhost_link_down(s, true);
>          vhost_user_stop(s);
> -        error_report("chardev \"%s\" went down\n", s->chr->label);
> +        error_report("chardev \"%s\" went down\n", s->nc.info_str);
>          break;
>      }
> }
> 
> Also, another comment: if I understand correctly, messages like
> VHOST_USER_SET_OWNER, VHOST_USER_SET_FEATURES, VHOST_SET_MEM_TABLE,
> (...) will be sent once per queue pair and not once per device.

One wonders why that's necessary.

> I don't think it's a problem, but maybe it deserves a small comment
> in the protocol documentation.
> 
> Apart from these 2 small comments, the approach looks correct, so
> Acked-by: Olivier Matz
> 
> Regards,
> Olivier
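
As a rough sketch of how the new "queues" option would be used once this
patch is applied: the netdev syntax below follows the existing vhost-user
netdev options plus the "queues" parameter this patch adds, while the
chardev id, socket path and the virtio-net-pci mq/vectors properties are
illustrative assumptions rather than anything specified by the patch:

    # assumed invocation: 2 queue pairs over one vhost-user socket
    qemu-system-x86_64 ... \
        -chardev socket,id=chr0,path=/tmp/vhost-user.sock \
        -netdev type=vhost-user,id=net0,chardev=chr0,queues=2 \
        -device virtio-net-pci,netdev=net0,mq=on,vectors=6

With queues=2, the loop in net_vhost_user_init() above would create two
net clients ("vhost-user0 to chr0" and "vhost-user1 to chr0") that share
the same unix socket chardev, which is why reporting nc->info_str in
net_vhost_user_event() is more informative than the bare chardev label.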